Mirror of https://github.com/kubeshark/kubeshark.git (synced 2026-02-20 13:00:02 +00:00)

# Compare commits (23 commits)
| Author | SHA1 | Date |
|---|---|---|
| | 0840642c98 | |
| | d5b01347df | |
| | 7dca1ad889 | |
| | 616eccb2cf | |
| | 30f07479cb | |
| | 7f880417e9 | |
| | 6b52458642 | |
| | 858a64687d | |
| | 819ccf54cd | |
| | 7cc077c8a0 | |
| | fae5f22d25 | |
| | eba7a3b476 | |
| | cf231538f4 | |
| | 073b0b72d3 | |
| | c8705822b3 | |
| | d4436d9f15 | |
| | 4e0ff74944 | |
| | 366c1d0c6c | |
| | 17fa163ee3 | |
| | 3644fdb533 | |
| | ab7c4e72c6 | |
| | e25e7925b6 | |
| | 80237c8090 | |
**.dockerignore**

```diff
@@ -2,7 +2,7 @@
 .editorconfig
 .gitignore
-.env.*
+**/.env*
 Dockerfile
 Makefile
 LICENSE
```
**.github/CODE_OF_CONDUCT.md** (new file, 76 lines)

```markdown
# Contributor Covenant Code of Conduct

## Our Pledge

In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, sex characteristics, gender identity and expression,
level of experience, education, socio-economic status, nationality, personal
appearance, race, religion, or sexual identity and orientation.

## Our Standards

Examples of behavior that contributes to creating a positive environment
include:

* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members

Examples of unacceptable behavior by participants include:

* The use of sexualized language or imagery and unwelcome sexual attention or
  advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic
  address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
  professional setting

## Our Responsibilities

Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.

Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.

## Scope

This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at mizu@up9.com. All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.

Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html

[homepage]: https://www.contributor-covenant.org

For answers to common questions about this code of conduct, see
https://www.contributor-covenant.org/faq
```
**CONTRIBUTE.md → .github/CONTRIBUTING.md** (16 lines changed)

```diff
 
-# CONTRIBUTE
+# Contributing to Mizu
 
 We welcome code contributions from the community.
 Please read and follow the guidelines below.
 
 ## Communication
 
 * Before starting work on a major feature, please reach out to us via [GitHub](https://github.com/up9inc/mizu), [Slack](https://join.slack.com/share/zt-u6bbs3pg-X1zhQOXOH0yEoqILgH~csw), [email](mailto:mizu@up9.com), etc. We will make sure no one else is already working on it. A _major feature_ is defined as any change that is > 100 LOC altered (not including tests), or changes any user-facing behavior
 * Small patches and bug fixes don't need prior communication.
 
-## Contribution requirements
+## Contribution Requirements
 
 * Code style - most of the code is written in Go, please follow [these guidelines](https://golang.org/doc/effective_go)
-* Go-tools compatible (`go get`, `go test`, etc)
-* Unit-test coverage can’t go down ..
+* Go-tools compatible (`go get`, `go test`, etc.)
+* Code coverage for unit tests must not decrease.
 * Code must be usefully commented. Not only for developers on the project, but also for external users of these packages
 * When reviewing PRs, you are encouraged to use Golang's [code review comments page](https://github.com/golang/go/wiki/CodeReviewComments)
+* Project follows [Google JSON Style Guide](https://google.github.io/styleguide/jsoncstyleguide.xml) for the REST APIs that are provided.
```
**.github/TESTING.md** (new file, 33 lines)

```markdown
# Testing guidelines

## Generic guidelines
* Use "[testing](https://pkg.go.dev/testing)" package
* Write [Table-driven tests using subtests](https://go.dev/blog/subtests)
* Use cleanup in test/subtest in order to clean up resources
* Name the test func "Test<tested_func_name><tested_case>"

## Unit tests
* Position the test file inside the folder of the tested package
* In case of internal func testing
  * Name the test file "<tested_file_name>_internal_test.go"
  * Name the test package same as the package being tested
  * Example - [Config](../cli/config/config_internal_test.go)
* In case of exported func testing
  * Name the test file "<tested_file_name>_test.go"
  * Name the test package "<tested_package>_test"
  * Example - [Slice Utils](../cli/mizu/sliceUtils_test.go)
* Make sure to run test coverage to make sure you covered all the cases and lines in the func

## Acceptance tests
* Position the test file inside the [acceptance tests folder](../acceptanceTests)
* Name the file "<tested_command>_test.go"
* Name the package "acceptanceTests"
* Do not run as part of the short tests
* Use/Create generic test utils func in acceptanceTests/testsUtils
* Don't use sleep inside the tests - active check
* Running acceptance tests locally
  * Switch to the branch that is being tested
  * Run acceptanceTests/setup.sh
  * Run tests (make acceptance-test)
  * Example - [Tap](../acceptanceTests/tap_test.go)
```
**.github/workflows/pr_validation.yml** (4 lines changed)

```diff
@@ -18,7 +18,7 @@ jobs:
       - name: Set up Go 1.16
         uses: actions/setup-go@v2
         with:
-          go-version: '^1.16'
+          go-version: '1.16'
 
       - name: Check out code into the Go module directory
         uses: actions/checkout@v2
@@ -33,7 +33,7 @@ jobs:
       - name: Set up Go 1.16
         uses: actions/setup-go@v2
         with:
-          go-version: '^1.16'
+          go-version: '1.16'
 
       - name: Check out code into the Go module directory
         uses: actions/checkout@v2
```
**.gitignore** (10 lines added)

```diff
@@ -19,3 +19,13 @@ build
 
 # Mac OS
 .DS_Store
+.vscode/
+
+# Ignore the scripts that are created for development
+*dev.*
+
+# Environment variables
+.env
+
+# pprof
+pprof/*
```
```diff
@@ -11,7 +11,7 @@ FROM golang:1.16-alpine AS builder
 # Set necessary environment variables needed for our image.
 ENV CGO_ENABLED=1 GOOS=linux GOARCH=amd64
 
-RUN apk add libpcap-dev gcc g++ make
+RUN apk add libpcap-dev gcc g++ make bash
 
 # Move to agent working directory (/agent-build).
 WORKDIR /app/agent-build
@@ -19,6 +19,7 @@ WORKDIR /app/agent-build
 COPY agent/go.mod agent/go.sum ./
 COPY shared/go.mod shared/go.mod ../shared/
 COPY tap/go.mod tap/go.mod ../tap/
+COPY tap/api/go.* ../tap/api/
 RUN go mod download
 # cheap trick to make the build faster (As long as go.mod wasn't changes)
 RUN go list -f '{{.Path}}@{{.Version}}' -m all | sed 1d | grep -e 'go-cache' -e 'sqlite' | xargs go get
@@ -38,6 +39,8 @@ RUN go build -ldflags="-s -w \
     -X 'mizuserver/pkg/version.BuildTimestamp=${BUILD_TIMESTAMP}' \
     -X 'mizuserver/pkg/version.SemVer=${SEM_VER}'" -o mizuagent .
 
+COPY build_extensions.sh ..
+RUN cd .. && /bin/bash build_extensions.sh
 
 FROM alpine:3.13.5
 
@@ -46,6 +49,7 @@ WORKDIR /app
 
 # Copy binary and config files from /build to root folder of scratch container.
 COPY --from=builder ["/app/agent-build/mizuagent", "."]
+COPY --from=builder ["/app/agent/build/extensions", "extensions"]
 COPY --from=site-build ["/app/ui-build/build", "site"]
 
 # gin-gonic runs in debug mode without this
```
**Makefile** (6 lines changed)

```diff
@@ -23,7 +23,7 @@ export SEM_VER?=0.0.0
 
 ui: ## Build UI.
 	@(cd ui; npm i ; npm run build; )
-	@ls -l ui/build
+	@ls -l ui/build
 
 cli: ## Build CLI.
 	@echo "building cli"; cd cli && $(MAKE) build
@@ -34,6 +34,7 @@ build-cli-ci: ## Build CLI for CI.
 agent: ## Build agent.
 	@(echo "building mizu agent .." )
 	@(cd agent; go build -o build/mizuagent main.go)
+	${MAKE} extensions
 	@ls -l agent/build
 
 docker: ## Build and publish agent docker image.
@@ -71,6 +72,9 @@ clean-cli: ## Clean CLI.
 clean-docker:
 	@(echo "DOCKER cleanup - NOT IMPLEMENTED YET " )
 
+extensions:
+	./build_extensions.sh
+
 test-cli:
 	@echo "running cli tests"; cd cli && $(MAKE) test
```
````diff
@@ -150,7 +150,6 @@ Web interface is now available at http://localhost:8899
 ^C
 
 ```
 
 Any request that contains `User-Agent` header with one of the specified values (`kube-probe` or `prometheus`) will not be captured
 
 ### API Rules validation
````
**TESTING.md** (deleted, 15 lines)

```diff
@@ -1,15 +0,0 @@
-
-# TESTING
-Testing guidelines for Mizu project
-
-## Unit-tests
-* TBD
-* TBD
-* TBD
-
-
-
-## System tests
-* TBD
-* TBD
-* TBD
```
```diff
@@ -4,7 +4,6 @@ import (
 	"bytes"
 	"encoding/json"
 	"fmt"
-	"io/ioutil"
 	"net/http"
 	"os/exec"
 	"strings"
@@ -12,7 +11,7 @@ import (
 	"time"
 )
 
-func TestTapAndFetch(t *testing.T) {
+func TestTap(t *testing.T) {
 	if testing.Short() {
 		t.Skip("ignored acceptance test")
 	}
@@ -71,7 +70,6 @@ func TestTapAndFetch(t *testing.T) {
 			}
 
 			entries := requestResult.([]interface{})
-
 			if len(entries) == 0 {
 				return fmt.Errorf("unexpected entries result - Expected more than 0 entries")
 			}
@@ -94,37 +92,6 @@ func TestTapAndFetch(t *testing.T) {
 			t.Errorf("%v", err)
 			return
 		}
-
-		fetchCmdArgs := getDefaultFetchCommandArgs()
-		fetchCmd := exec.Command(cliPath, fetchCmdArgs...)
-		t.Logf("running command: %v", fetchCmd.String())
-
-		if err := fetchCmd.Start(); err != nil {
-			t.Errorf("failed to start fetch command, err: %v", err)
-			return
-		}
-
-		harCheckFunc := func() error {
-			harBytes, readFileErr := ioutil.ReadFile("./unknown_source.har")
-			if readFileErr != nil {
-				return fmt.Errorf("failed to read har file, err: %v", readFileErr)
-			}
-
-			harEntries, err := getEntriesFromHarBytes(harBytes)
-			if err != nil {
-				return fmt.Errorf("failed to get entries from har, err: %v", err)
-			}
-
-			if len(harEntries) == 0 {
-				return fmt.Errorf("unexpected har entries result - Expected more than 0 entries")
-			}
-
-			return nil
-		}
-		if err := retriesExecute(shortRetriesCount, harCheckFunc); err != nil {
-			t.Errorf("%v", err)
-			return
-		}
 	})
 }
@@ -523,6 +490,10 @@ func TestTapRedact(t *testing.T) {
 			}
 
 			entries := requestResult.([]interface{})
+			if len(entries) == 0 {
+				return fmt.Errorf("unexpected entries result - Expected more than 0 entries")
+			}
+
 			firstEntry := entries[0].(map[string]interface{})
 
 			entryUrl := fmt.Sprintf("%v/api/entries/%v", apiServerUrl, firstEntry["id"])
@@ -625,6 +596,10 @@ func TestTapNoRedact(t *testing.T) {
 			}
 
 			entries := requestResult.([]interface{})
+			if len(entries) == 0 {
+				return fmt.Errorf("unexpected entries result - Expected more than 0 entries")
+			}
+
 			firstEntry := entries[0].(map[string]interface{})
 
 			entryUrl := fmt.Sprintf("%v/api/entries/%v", apiServerUrl, firstEntry["id"])
@@ -727,6 +702,10 @@ func TestTapRegexMasking(t *testing.T) {
 			}
 
 			entries := requestResult.([]interface{})
+			if len(entries) == 0 {
+				return fmt.Errorf("unexpected entries result - Expected more than 0 entries")
+			}
+
 			firstEntry := entries[0].(map[string]interface{})
 
 			entryUrl := fmt.Sprintf("%v/api/entries/%v", apiServerUrl, firstEntry["id"])
```
```diff
@@ -76,13 +76,6 @@ func getDefaultTapNamespace() []string {
 	return []string{"-n", "mizu-tests"}
 }
 
-func getDefaultFetchCommandArgs() []string {
-	fetchCommand := "fetch"
-	defaultCmdArgs := getDefaultCommandArgs()
-
-	return append([]string{fetchCommand}, defaultCmdArgs...)
-}
-
 func getDefaultConfigCommandArgs() []string {
 	configCommand := "config"
 	defaultCmdArgs := getDefaultCommandArgs()
@@ -91,10 +84,10 @@ func getDefaultConfigCommandArgs() []string {
 }
 
 func retriesExecute(retriesCount int, executeFunc func() error) error {
-	var lastError error
+	var lastError interface{}
 
 	for i := 0; i < retriesCount; i++ {
-		if err := executeFunc(); err != nil {
+		if err := tryExecuteFunc(executeFunc); err != nil {
 			lastError = err
 
 			time.Sleep(1 * time.Second)
@@ -107,6 +100,16 @@ func retriesExecute(retriesCount int, executeFunc func() error) error {
 	return fmt.Errorf("reached max retries count, retries count: %v, last err: %v", retriesCount, lastError)
 }
 
+func tryExecuteFunc(executeFunc func() error) (err interface{}) {
+	defer func() {
+		if panicErr := recover(); panicErr != nil {
+			err = panicErr
+		}
+	}()
+
+	return executeFunc()
+}
+
 func waitTapPodsReady(apiServerUrl string) error {
 	resolvingUrl := fmt.Sprintf("%v/status/tappersCount", apiServerUrl)
 	tapPodsReadyFunc := func() error {
@@ -179,19 +182,6 @@ func cleanupCommand(cmd *exec.Cmd) error {
 	return nil
 }
 
-func getEntriesFromHarBytes(harBytes []byte) ([]interface{}, error) {
-	harInterface, convertErr := jsonBytesToInterface(harBytes)
-	if convertErr != nil {
-		return nil, convertErr
-	}
-
-	har := harInterface.(map[string]interface{})
-	harLog := har["log"].(map[string]interface{})
-	harEntries := harLog["entries"].([]interface{})
-
-	return harEntries, nil
-}
-
 func getPods(tapStatusInterface interface{}) ([]map[string]interface{}, error) {
 	tapStatus := tapStatusInterface.(map[string]interface{})
 	podsInterface := tapStatus["pods"].([]interface{})
```
```diff
@@ -1,7 +1,6 @@
 # mizu agent
 Agent for MIZU (API server and tapper)
 Basic APIs:
-* /fetch - retrieve traffic data
 * /stats - retrieve statistics of collected data
 * /viewer - web ui
```
```diff
@@ -3,7 +3,6 @@ module mizuserver
 go 1.16
 
 require (
 	github.com/beevik/etree v1.1.0
-	github.com/djherbis/atime v1.0.0
 	github.com/fsnotify/fsnotify v1.4.9
 	github.com/gin-contrib/static v0.0.1
@@ -18,8 +17,9 @@ require (
 	github.com/romana/rlog v0.0.0-20171115192701-f018bc92e7d7
 	github.com/up9inc/mizu/shared v0.0.0
 	github.com/up9inc/mizu/tap v0.0.0
+	github.com/up9inc/mizu/tap/api v0.0.0
 	github.com/yalp/jsonpath v0.0.0-20180802001716-5cc68e5049a0
-	go.mongodb.org/mongo-driver v1.5.1
+	go.mongodb.org/mongo-driver v1.7.1
 	gorm.io/driver/sqlite v1.1.4
 	gorm.io/gorm v1.21.8
 	k8s.io/api v0.21.0
@@ -30,3 +30,5 @@ require (
 replace github.com/up9inc/mizu/shared v0.0.0 => ../shared
 
 replace github.com/up9inc/mizu/tap v0.0.0 => ../tap
+
+replace github.com/up9inc/mizu/tap/api v0.0.0 => ../tap/api
```
**agent/go.sum** (19 lines changed)

```diff
@@ -42,9 +42,6 @@ github.com/NYTimes/gziphandler v0.0.0-20170623195520-56545f4a5d46/go.mod h1:3wb0
 github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
 github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
 github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY=
-github.com/aws/aws-sdk-go v1.34.28/go.mod h1:H7NKnBqNVzoTJpGfLrQkkD+ytBA93eiDYi/+8rV9s48=
 github.com/beevik/etree v1.1.0 h1:T0xke/WvNtMoCqgzPhkX2r4rjY3GDZFi+FjpRZY2Jbs=
 github.com/beevik/etree v1.1.0/go.mod h1:r8Aw8JqVegEf0w2fDnATrX9VpkMcyFeM0FhwO62wh+A=
 github.com/bradleyfalzon/tlsx v0.0.0-20170624122154-28fd0e59bac4 h1:NJOOlc6ZJjix0A1rAU+nxruZtR8KboG1848yqpIUo4M=
 github.com/bradleyfalzon/tlsx v0.0.0-20170624122154-28fd0e59bac4/go.mod h1:DQPxZS994Ld1Y8uwnJT+dRL04XPD0cElP/pHH/zEBHM=
 github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
@@ -101,7 +98,6 @@ github.com/go-playground/validator/v10 v10.2.0/go.mod h1:uOYAAleCW8F/7oMFd6aG0GO
 github.com/go-playground/validator/v10 v10.4.1/go.mod h1:nlOn6nFhuKACm19sB/8EGNn9GlaMV7XkbRSipzJ0Ii4=
 github.com/go-playground/validator/v10 v10.5.0 h1:X9rflw/KmpACwT8zdrm1upefpvdy6ur8d1kWyq6sg3E=
 github.com/go-playground/validator/v10 v10.5.0/go.mod h1:xm76BBt941f7yWdGnI2DVPFFg1UK3YY04qifoXU3lOk=
-github.com/go-sql-driver/mysql v1.5.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg=
 github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
 github.com/gobuffalo/attrs v0.0.0-20190224210810-a9411de4debd/go.mod h1:4duuawTqi2wkkpB4ePgWMaai6/Kc6WEz83bhFwpHzj0=
 github.com/gobuffalo/depgen v0.0.0-20190329151759-d478694a28d3/go.mod h1:3STtPUQYuzV0gBVOY3vy6CfMm/ljR4pABfrTeHNLHUY=
@@ -194,8 +190,6 @@ github.com/jinzhu/inflection v1.0.0/go.mod h1:h+uFLlag+Qp1Va5pdKtLDYj+kHp5pxUVkr
 github.com/jinzhu/now v1.1.1/go.mod h1:d3SSVoowX0Lcu0IBviAWJpolVfI5UJVZZ7cO71lE/z8=
 github.com/jinzhu/now v1.1.2 h1:eVKgfIdy9b6zbWBMgFpfDPoAMifwSZagU9HmEU6zgiI=
 github.com/jinzhu/now v1.1.2/go.mod h1:d3SSVoowX0Lcu0IBviAWJpolVfI5UJVZZ7cO71lE/z8=
-github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=
-github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U=
 github.com/joho/godotenv v1.3.0/go.mod h1:7hK45KPybAkOC6peb+G5yklZfMxEjkZhHbwpqxOKXbg=
 github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
 github.com/json-iterator/go v1.1.9/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
@@ -292,8 +286,8 @@ github.com/yalp/jsonpath v0.0.0-20180802001716-5cc68e5049a0/go.mod h1:/LWChgwKmv
 github.com/youmark/pkcs8 v0.0.0-20181117223130-1be2e3e5546d/go.mod h1:rHwXgn7JulP+udvsHwJoVG1YGAP6VLg4y9I5dyZdqmA=
 github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
 github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
-go.mongodb.org/mongo-driver v1.5.1 h1:9nOVLGDfOaZ9R0tBumx/BcuqkbFpyTCU2r/Po7A2azI=
-go.mongodb.org/mongo-driver v1.5.1/go.mod h1:gRXCHX4Jo7J0IJ1oDQyUxF7jfy19UfxniMS4xxMmUqw=
+go.mongodb.org/mongo-driver v1.7.1 h1:jwqTeEM3x6L9xDXrCxN0Hbg7vdGfPBOTIkr0+/LYZDA=
+go.mongodb.org/mongo-driver v1.7.1/go.mod h1:Q4oFMbo1+MSNqICAdYMlC/zSTrwCogR4R8NzkI+yfU8=
 go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
 go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
 go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
@@ -362,9 +356,8 @@ golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLL
 golang.org/x/net v0.0.0-20200301022130-244492dfa37a/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
 golang.org/x/net v0.0.0-20200324143707-d3edc9973b7e/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
 golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
-golang.org/x/net v0.0.0-20210224082022-3d97a244fca7 h1:OgUuv8lsRpBibGNbSizVwKWlysjaNzmC9gYMhPVfqFM=
-golang.org/x/net v0.0.0-20210224082022-3d97a244fca7/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
+golang.org/x/net v0.0.0-20210421230115-4e50805a0758 h1:aEpZnXcAmXkd6AvLb2OPt+EN1Zu/8Ne3pCqPjja5PXY=
+golang.org/x/net v0.0.0-20210421230115-4e50805a0758/go.mod h1:72T/g9IO56b78aLF+1Kcs5dz7/ng1VjMUvfKvpfy+jM=
 golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
 golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
 golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -410,9 +403,8 @@ golang.org/x/sys v0.0.0-20200302150141-5c8b2ff67527/go.mod h1:h1NjWce9XRLGQEsW7w
 golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
-golang.org/x/sys v0.0.0-20210225134936-a50acf3fe073 h1:8qxJSnu+7dRq6upnbntrmriWByIakBuct5OM/MdQC1M=
-golang.org/x/sys v0.0.0-20210225134936-a50acf3fe073/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210420072515-93ed5bcd2bfe h1:WdX7u8s3yOigWAhHEaDl8r9G+4XwFQEQFtBMYyN+kXQ=
+golang.org/x/sys v0.0.0-20210420072515-93ed5bcd2bfe/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
 golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
 golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d h1:SZxvLBoTP5yHO3Frd4z4vrF+DBX9vMVanchswa69toE=
@@ -423,9 +415,8 @@ golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3
 golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
 golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
 golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
-golang.org/x/text v0.3.5 h1:i6eZZ+zk0SOf0xgBpEpPD18qWcJda6q1sxt3S0kzyUQ=
-golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
+golang.org/x/text v0.3.6 h1:aRYxNxv6iGQlyVaZmk6ZgYEDa+Jg18DxebPSrd6bg1M=
+golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
 golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
 golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
 golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
```
**agent/main.go** (184 lines changed)

```diff
@@ -4,21 +4,29 @@ import (
 	"encoding/json"
 	"flag"
 	"fmt"
+	"io/ioutil"
+	"log"
+	"mizuserver/pkg/api"
+	"mizuserver/pkg/controllers"
+	"mizuserver/pkg/models"
+	"mizuserver/pkg/routes"
+	"mizuserver/pkg/utils"
+	"net/http"
+	"os"
+	"os/signal"
+	"path"
+	"path/filepath"
+	"plugin"
+	"sort"
+	"strings"
 
 	"github.com/gin-contrib/static"
 	"github.com/gin-gonic/gin"
+	"github.com/gorilla/websocket"
 	"github.com/romana/rlog"
 	"github.com/up9inc/mizu/shared"
 	"github.com/up9inc/mizu/tap"
-	"mizuserver/pkg/api"
-	"mizuserver/pkg/models"
-	"mizuserver/pkg/routes"
-	"mizuserver/pkg/sensitiveDataFiltering"
-	"mizuserver/pkg/utils"
-	"net/http"
-	"os"
-	"os/signal"
-	"strings"
+	tapApi "github.com/up9inc/mizu/tap/api"
 )
@@ -29,25 +37,29 @@ var namespace = flag.String("namespace", "", "Resolve IPs if they belong to reso
 var harsReaderMode = flag.Bool("hars-read", false, "Run in hars-read mode")
 var harsDir = flag.String("hars-dir", "", "Directory to read hars from")
 
+var extensions []*tapApi.Extension             // global
+var extensionsMap map[string]*tapApi.Extension // global
+
 func main() {
 	flag.Parse()
+	loadExtensions()
 	hostMode := os.Getenv(shared.HostModeEnvVar) == "1"
 	tapOpts := &tap.TapOpts{HostMode: hostMode}
 
-	if !*tapperMode && !*apiServerMode && !*standaloneMode && !*harsReaderMode{
+	if !*tapperMode && !*apiServerMode && !*standaloneMode && !*harsReaderMode {
 		panic("One of the flags --tap, --api or --standalone or --hars-read must be provided")
 	}
 
 	if *standaloneMode {
 		api.StartResolving(*namespace)
 
-		harOutputChannel, outboundLinkOutputChannel := tap.StartPassiveTapper(tapOpts)
-		filteredHarChannel := make(chan *tap.OutputChannelItem)
+		outputItemsChannel := make(chan *tapApi.OutputChannelItem)
+		filteredOutputItemsChannel := make(chan *tapApi.OutputChannelItem)
+		tap.StartPassiveTapper(tapOpts, outputItemsChannel, extensions)
 
-		go filterHarItems(harOutputChannel, filteredHarChannel, getTrafficFilteringOptions())
-		go api.StartReadingEntries(filteredHarChannel, nil)
-		go api.StartReadingOutbound(outboundLinkOutputChannel)
+		go filterItems(outputItemsChannel, filteredOutputItemsChannel, getTrafficFilteringOptions())
+		go api.StartReadingEntries(filteredOutputItemsChannel, nil, extensionsMap)
+		// go api.StartReadingOutbound(outboundLinkOutputChannel)
 
 		hostApi(nil)
 	} else if *tapperMode {
@@ -61,31 +73,32 @@ func main() {
 			rlog.Infof("Filtering for the following authorities: %v", tap.GetFilterIPs())
 		}
 
-		harOutputChannel, outboundLinkOutputChannel := tap.StartPassiveTapper(tapOpts)
-
+		// harOutputChannel, outboundLinkOutputChannel := tap.StartPassiveTapper(tapOpts)
+		filteredOutputItemsChannel := make(chan *tapApi.OutputChannelItem)
+		tap.StartPassiveTapper(tapOpts, filteredOutputItemsChannel, extensions)
 		socketConnection, err := shared.ConnectToSocketServer(*apiServerAddress, shared.DEFAULT_SOCKET_RETRIES, shared.DEFAULT_SOCKET_RETRY_SLEEP_TIME, false)
 		if err != nil {
 			panic(fmt.Sprintf("Error connecting to socket server at %s %v", *apiServerAddress, err))
 		}
 
-		go pipeTapChannelToSocket(socketConnection, harOutputChannel)
-		go pipeOutboundLinksChannelToSocket(socketConnection, outboundLinkOutputChannel)
+		go pipeTapChannelToSocket(socketConnection, filteredOutputItemsChannel)
+		// go pipeOutboundLinksChannelToSocket(socketConnection, outboundLinkOutputChannel)
 	} else if *apiServerMode {
 		api.StartResolving(*namespace)
 
-		socketHarOutChannel := make(chan *tap.OutputChannelItem, 1000)
-		filteredHarChannel := make(chan *tap.OutputChannelItem)
+		outputItemsChannel := make(chan *tapApi.OutputChannelItem)
+		filteredOutputItemsChannel := make(chan *tapApi.OutputChannelItem)
 
-		go filterHarItems(socketHarOutChannel, filteredHarChannel, getTrafficFilteringOptions())
-		go api.StartReadingEntries(filteredHarChannel, nil)
+		go filterItems(outputItemsChannel, filteredOutputItemsChannel, getTrafficFilteringOptions())
+		go api.StartReadingEntries(filteredOutputItemsChannel, nil, extensionsMap)
 
-		hostApi(socketHarOutChannel)
+		hostApi(outputItemsChannel)
 	} else if *harsReaderMode {
-		socketHarOutChannel := make(chan *tap.OutputChannelItem, 1000)
-		filteredHarChannel := make(chan *tap.OutputChannelItem)
+		outputItemsChannel := make(chan *tapApi.OutputChannelItem, 1000)
+		filteredHarChannel := make(chan *tapApi.OutputChannelItem)
 
-		go filterHarItems(socketHarOutChannel, filteredHarChannel, getTrafficFilteringOptions())
-		go api.StartReadingEntries(filteredHarChannel, harsDir)
+		go filterItems(outputItemsChannel, filteredHarChannel, getTrafficFilteringOptions())
+		go api.StartReadingEntries(filteredHarChannel, harsDir, extensionsMap)
 		hostApi(nil)
 	}
 
@@ -96,7 +109,50 @@ func main() {
 	rlog.Info("Exiting")
 }
 
-func hostApi(socketHarOutputChannel chan<- *tap.OutputChannelItem) {
+func loadExtensions() {
+	dir, _ := filepath.Abs(filepath.Dir(os.Args[0]))
+	extensionsDir := path.Join(dir, "./extensions/")
+
+	files, err := ioutil.ReadDir(extensionsDir)
+	if err != nil {
+		log.Fatal(err)
+	}
+	extensions = make([]*tapApi.Extension, len(files))
+	extensionsMap = make(map[string]*tapApi.Extension)
+	for i, file := range files {
+		filename := file.Name()
+		log.Printf("Loading extension: %s\n", filename)
+		extension := &tapApi.Extension{
+			Path: path.Join(extensionsDir, filename),
+		}
+		plug, _ := plugin.Open(extension.Path)
+		extension.Plug = plug
+		symDissector, err := plug.Lookup("Dissector")
+
+		var dissector tapApi.Dissector
+		var ok bool
+		dissector, ok = symDissector.(tapApi.Dissector)
+		if err != nil || !ok {
+			panic(fmt.Sprintf("Failed to load the extension: %s\n", extension.Path))
+		}
+		dissector.Register(extension)
+		extension.Dissector = dissector
+		extensions[i] = extension
+		extensionsMap[extension.Protocol.Name] = extension
+	}
+
+	sort.Slice(extensions, func(i, j int) bool {
+		return extensions[i].Protocol.Priority < extensions[j].Protocol.Priority
+	})
+
+	for _, extension := range extensions {
+		log.Printf("Extension Properties: %+v\n", extension)
+	}
+
+	controllers.InitExtensionsMap(extensionsMap)
+}
+
+func hostApi(socketHarOutputChannel chan<- *tapApi.OutputChannelItem) {
 	app := gin.Default()
 
 	app.GET("/echo", func(c *gin.Context) {
@@ -104,9 +160,10 @@ func hostApi(socketHarOutputChannel chan<- *tap.OutputChannelItem) {
 	})
 
 	eventHandlers := api.RoutesEventHandlers{
-		SocketHarOutChannel: socketHarOutputChannel,
+		SocketOutChannel: socketHarOutputChannel,
 	}
 
+	app.Use(DisableRootStaticCache())
 	app.Use(static.ServeRoot("/", "./site"))
 	app.Use(CORSMiddleware()) // This has to be called after the static middleware, does not work if its called before
 
@@ -119,6 +176,17 @@ func hostApi(socketHarOutputChannel chan<- *tap.OutputChannelItem) {
 	utils.StartServer(app)
 }
 
 func DisableRootStaticCache() gin.HandlerFunc {
```
|
||||
return func(c *gin.Context) {
|
||||
if c.Request.RequestURI == "/" {
|
||||
// Disable cache only for the main static route
|
||||
c.Writer.Header().Set("Cache-Control", "no-store")
|
||||
}
|
||||
|
||||
c.Next()
|
||||
}
|
||||
}
|
||||
|
||||
func CORSMiddleware() gin.HandlerFunc {
|
||||
return func(c *gin.Context) {
|
||||
c.Writer.Header().Set("Access-Control-Allow-Origin", "*")
|
||||
@@ -135,20 +203,34 @@ func CORSMiddleware() gin.HandlerFunc {
|
||||
}
|
||||
}
|
||||
|
||||
func parseEnvVar(env string) map[string][]string {
|
||||
var mapOfList map[string][]string
|
||||
|
||||
val, present := os.LookupEnv(env)
|
||||
|
||||
if !present {
|
||||
return mapOfList
|
||||
}
|
||||
|
||||
err := json.Unmarshal([]byte(val), &mapOfList)
|
||||
if err != nil {
|
||||
panic(fmt.Sprintf("env var %s's value of %s is invalid! must be map[string][]string %v", env, mapOfList, err))
|
||||
}
|
||||
return mapOfList
|
||||
}
|
||||
|
||||
func getTapTargets() []string {
|
||||
nodeName := os.Getenv(shared.NodeNameEnvVar)
|
||||
var tappedAddressesPerNodeDict map[string][]string
|
||||
err := json.Unmarshal([]byte(os.Getenv(shared.TappedAddressesPerNodeDictEnvVar)), &tappedAddressesPerNodeDict)
|
||||
if err != nil {
|
||||
panic(fmt.Sprintf("env var %s's value of %s is invalid! must be map[string][]string %v", shared.TappedAddressesPerNodeDictEnvVar, tappedAddressesPerNodeDict, err))
|
||||
}
|
||||
tappedAddressesPerNodeDict := parseEnvVar(shared.TappedAddressesPerNodeDictEnvVar)
|
||||
return tappedAddressesPerNodeDict[nodeName]
|
||||
}
|
||||
|
||||
func getTrafficFilteringOptions() *shared.TrafficFilteringOptions {
|
||||
filteringOptionsJson := os.Getenv(shared.MizuFilteringOptionsEnvVar)
|
||||
if filteringOptionsJson == "" {
|
||||
return nil
|
||||
return &shared.TrafficFilteringOptions{
|
||||
HealthChecksUserAgentHeaders: []string{},
|
||||
}
|
||||
}
|
||||
var filteringOptions shared.TrafficFilteringOptions
|
||||
err := json.Unmarshal([]byte(filteringOptionsJson), &filteringOptions)
|
||||
@@ -159,7 +241,7 @@ func getTrafficFilteringOptions() *shared.TrafficFilteringOptions {
|
||||
return &filteringOptions
|
||||
}
|
||||
|
||||
func filterHarItems(inChannel <-chan *tap.OutputChannelItem, outChannel chan *tap.OutputChannelItem, filterOptions *shared.TrafficFilteringOptions) {
|
||||
func filterItems(inChannel <-chan *tapApi.OutputChannelItem, outChannel chan *tapApi.OutputChannelItem, filterOptions *shared.TrafficFilteringOptions) {
|
||||
for message := range inChannel {
|
||||
if message.ConnectionInfo.IsOutgoing && api.CheckIsServiceIP(message.ConnectionInfo.ServerIP) {
|
||||
continue
|
||||
@@ -169,19 +251,27 @@ func filterHarItems(inChannel <-chan *tap.OutputChannelItem, outChannel chan *ta
|
||||
continue
|
||||
}
|
||||
|
||||
if !filterOptions.DisableRedaction {
|
||||
sensitiveDataFiltering.FilterSensitiveInfoFromHarRequest(message, filterOptions)
|
||||
}
|
||||
// if !filterOptions.DisableRedaction {
|
||||
// sensitiveDataFiltering.FilterSensitiveInfoFromHarRequest(message, filterOptions)
|
||||
// }
|
||||
|
||||
outChannel <- message
|
||||
}
|
||||
}
|
||||
|
||||
func isHealthCheckByUserAgent(message *tap.OutputChannelItem, userAgentsToIgnore []string) bool {
|
||||
for _, header := range message.HarEntry.Request.Headers {
|
||||
if strings.ToLower(header.Name) == "user-agent" {
|
||||
func isHealthCheckByUserAgent(item *tapApi.OutputChannelItem, userAgentsToIgnore []string) bool {
|
||||
if item.Protocol.Name != "http" {
|
||||
return false
|
||||
}
|
||||
|
||||
request := item.Pair.Request.Payload.(map[string]interface{})
|
||||
reqDetails := request["details"].(map[string]interface{})
|
||||
|
||||
for _, header := range reqDetails["headers"].([]interface{}) {
|
||||
h := header.(map[string]interface{})
|
||||
if strings.ToLower(h["name"].(string)) == "user-agent" {
|
||||
for _, userAgent := range userAgentsToIgnore {
|
||||
if strings.Contains(strings.ToLower(header.Value), strings.ToLower(userAgent)) {
|
||||
if strings.Contains(strings.ToLower(h["value"].(string)), strings.ToLower(userAgent)) {
|
||||
return true
|
||||
}
|
||||
}
|
||||
@@ -191,7 +281,7 @@ func isHealthCheckByUserAgent(message *tap.OutputChannelItem, userAgentsToIgnore
|
||||
return false
|
||||
}
|
||||
|
||||
func pipeTapChannelToSocket(connection *websocket.Conn, messageDataChannel <-chan *tap.OutputChannelItem) {
|
||||
func pipeTapChannelToSocket(connection *websocket.Conn, messageDataChannel <-chan *tapApi.OutputChannelItem) {
|
||||
if connection == nil {
|
||||
panic("Websocket connection is nil")
|
||||
}
|
||||
@@ -207,6 +297,8 @@ func pipeTapChannelToSocket(connection *websocket.Conn, messageDataChannel <-cha
|
||||
continue
|
||||
}
|
||||
|
||||
// NOTE: This is where the `*tapApi.OutputChannelItem` leaves the code
|
||||
// and goes into the intermediate WebSocket.
|
||||
err = connection.WriteMessage(websocket.TextMessage, marshaledData)
|
||||
if err != nil {
|
||||
rlog.Infof("error sending message through socket server %s, (%v,%+v)\n", err, err, err)
|
||||
|
||||
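The `loadExtensions` hunk above relies on Go's standard-library `plugin` package: open a shared object, look up an exported symbol, and type-assert it to a shared interface. Below is a minimal, hedged sketch of that pattern; the `Dissector` interface and `loadDissector` helper here are illustrative stand-ins for the real `tapApi` types, not the project's code.

```go
package main

import (
	"fmt"
	"plugin"
)

// Dissector is a stand-in for the tapApi.Dissector interface that every
// extension .so file is expected to export under the symbol name "Dissector".
type Dissector interface {
	Register(extension interface{})
}

// loadDissector opens an extension shared object and extracts its Dissector.
func loadDissector(path string) (Dissector, error) {
	plug, err := plugin.Open(path) // dlopen(3) under the hood; Linux/macOS only
	if err != nil {
		return nil, err
	}
	sym, err := plug.Lookup("Dissector") // find the exported symbol
	if err != nil {
		return nil, err
	}
	dissector, ok := sym.(Dissector) // must satisfy the shared interface
	if !ok {
		return nil, fmt.Errorf("symbol in %s does not implement Dissector", path)
	}
	return dissector, nil
}

func main() {
	// With no extension .so on disk this only exercises the error path.
	if _, err := loadDissector("does_not_exist.so"); err != nil {
		fmt.Println("load failed as expected:", err)
	}
}
```

Note how the diff's version panics on a failed lookup or assertion, which is reasonable for required extensions loaded at startup; the sketch returns an error instead.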
@@ -5,8 +5,8 @@ import (
    "context"
    "encoding/json"
    "fmt"
    "mizuserver/pkg/database"
    "mizuserver/pkg/holder"
    "mizuserver/pkg/providers"
    "net/url"
    "os"
    "path"
@@ -14,12 +14,13 @@ import (
    "strings"
    "time"

    "go.mongodb.org/mongo-driver/bson/primitive"

    "github.com/google/martian/har"
    "github.com/romana/rlog"
    "github.com/up9inc/mizu/tap"
    "go.mongodb.org/mongo-driver/bson/primitive"
    tapApi "github.com/up9inc/mizu/tap/api"

    "mizuserver/pkg/database"
    "mizuserver/pkg/models"
    "mizuserver/pkg/resolver"
    "mizuserver/pkg/utils"
@@ -49,11 +50,11 @@ func StartResolving(namespace string) {
    holder.SetResolver(res)
}

func StartReadingEntries(harChannel <-chan *tap.OutputChannelItem, workingDir *string) {
func StartReadingEntries(harChannel <-chan *tapApi.OutputChannelItem, workingDir *string, extensionsMap map[string]*tapApi.Extension) {
    if workingDir != nil && *workingDir != "" {
        startReadingFiles(*workingDir)
    } else {
        startReadingChannel(harChannel)
        startReadingChannel(harChannel, extensionsMap)
    }
}

@@ -87,30 +88,36 @@ func startReadingFiles(workingDir string) {
        decErr := json.NewDecoder(bufio.NewReader(file)).Decode(&inputHar)
        utils.CheckErr(decErr)

        for _, entry := range inputHar.Log.Entries {
            time.Sleep(time.Millisecond * 250)
            connectionInfo := &tap.ConnectionInfo{
                ClientIP: fileInfo.Name(),
                ClientPort: "",
                ServerIP: "",
                ServerPort: "",
                IsOutgoing: false,
            }
            saveHarToDb(entry, connectionInfo)
        }
        // for _, entry := range inputHar.Log.Entries {
        // time.Sleep(time.Millisecond * 250)
        // // connectionInfo := &tap.ConnectionInfo{
        // // ClientIP: fileInfo.Name(),
        // // ClientPort: "",
        // // ServerIP: "",
        // // ServerPort: "",
        // // IsOutgoing: false,
        // // }
        // // saveHarToDb(entry, connectionInfo)
        // }
        rmErr := os.Remove(inputFilePath)
        utils.CheckErr(rmErr)
    }
}

func startReadingChannel(outputItems <-chan *tap.OutputChannelItem) {
func startReadingChannel(outputItems <-chan *tapApi.OutputChannelItem, extensionsMap map[string]*tapApi.Extension) {
    if outputItems == nil {
        panic("Channel of captured messages is nil")
    }

    for item := range outputItems {
        providers.EntryAdded()
        saveHarToDb(item.HarEntry, item.ConnectionInfo)
        extension := extensionsMap[item.Protocol.Name]
        resolvedSource, resolvedDestionation := resolveIP(item.ConnectionInfo)
        mizuEntry := extension.Dissector.Analyze(item, primitive.NewObjectID().Hex(), resolvedSource, resolvedDestionation)
        baseEntry := extension.Dissector.Summarize(mizuEntry)
        mizuEntry.EstimatedSizeBytes = getEstimatedEntrySizeBytes(mizuEntry)
        database.CreateEntry(mizuEntry)
        baseEntryBytes, _ := models.CreateBaseEntryWebSocketMessage(baseEntry)
        BroadcastToBrowserClients(baseEntryBytes)
    }
}

@@ -121,14 +128,7 @@ func StartReadingOutbound(outboundLinkChannel <-chan *tap.OutboundLink) {
    }
}

func saveHarToDb(entry *har.Entry, connectionInfo *tap.ConnectionInfo) {
    entryBytes, _ := json.Marshal(entry)
    serviceName, urlPath := getServiceNameFromUrl(entry.Request.URL)
    entryId := primitive.NewObjectID().Hex()
    var (
        resolvedSource string
        resolvedDestination string
    )
func resolveIP(connectionInfo *tapApi.ConnectionInfo) (resolvedSource string, resolvedDestination string) {
    if k8sResolver != nil {
        unresolvedSource := connectionInfo.ClientIP
        resolvedSource = k8sResolver.Resolve(unresolvedSource)
@@ -147,32 +147,7 @@ func saveHarToDb(entry *har.Entry, connectionInfo *tap.ConnectionInfo) {
        }
    }
}

    mizuEntry := models.MizuEntry{
        EntryId: entryId,
        Entry: string(entryBytes), // simple way to store it and not convert to bytes
        Service: serviceName,
        Url: entry.Request.URL,
        Path: urlPath,
        Method: entry.Request.Method,
        Status: entry.Response.Status,
        RequestSenderIp: connectionInfo.ClientIP,
        Timestamp: entry.StartedDateTime.UnixNano() / int64(time.Millisecond),
        ResolvedSource: resolvedSource,
        ResolvedDestination: resolvedDestination,
        IsOutgoing: connectionInfo.IsOutgoing,
    }
    mizuEntry.EstimatedSizeBytes = getEstimatedEntrySizeBytes(mizuEntry)
    database.CreateEntry(&mizuEntry)

    baseEntry := models.BaseEntryDetails{}
    if err := models.GetEntry(&mizuEntry, &baseEntry); err != nil {
        return
    }
    baseEntry.Rules = models.RunValidationRulesState(*entry, serviceName)
    baseEntry.Latency = entry.Timings.Receive
    baseEntryBytes, _ := models.CreateBaseEntryWebSocketMessage(&baseEntry)
    BroadcastToBrowserClients(baseEntryBytes)
    return resolvedSource, resolvedDestination
}

func getServiceNameFromUrl(inputUrl string) (string, string) {
@@ -182,11 +157,14 @@ func getServiceNameFromUrl(inputUrl string) (string, string) {
}

func CheckIsServiceIP(address string) bool {
    if k8sResolver == nil {
        return false
    }
    return k8sResolver.CheckIsServiceIP(address)
}

// gives a rough estimate of the size this will take up in the db, good enough for maintaining db size limit accurately
func getEstimatedEntrySizeBytes(mizuEntry models.MizuEntry) int {
func getEstimatedEntrySizeBytes(mizuEntry *tapApi.MizuEntry) int {
    sizeBytes := len(mizuEntry.Entry)
    sizeBytes += len(mizuEntry.EntryId)
    sizeBytes += len(mizuEntry.Service)

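The `getEstimatedEntrySizeBytes` change above keeps the same rough heuristic across the type migration: approximate an entry's database footprint by summing the lengths of its stored strings. A small illustrative sketch of the idea, with field names simplified from the `MizuEntry` columns:

```go
package main

import "fmt"

// entry is a simplified stand-in for the persisted MizuEntry record.
type entry struct {
	Entry   string // the serialized payload stored in the "entry" column
	EntryId string
	Service string
}

// estimateSizeBytes gives a rough estimate of the row's size in the db by
// summing the string field lengths; as the diff's comment says, this is
// "good enough for maintaining db size limit accurately".
func estimateSizeBytes(e entry) int {
	return len(e.Entry) + len(e.EntryId) + len(e.Service)
}

func main() {
	e := entry{Entry: `{"k":"v"}`, EntryId: "abc123", Service: "carts"}
	fmt.Println(estimateSizeBytes(e)) // 9 + 6 + 5 = 20
}
```

This deliberately ignores fixed-width columns and per-row overhead; it only needs to be proportional to the real size so the pruning loop can keep the file under its cap.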
@@ -8,9 +8,10 @@ import (
    "mizuserver/pkg/up9"
    "sync"

    tapApi "github.com/up9inc/mizu/tap/api"

    "github.com/romana/rlog"
    "github.com/up9inc/mizu/shared"
    "github.com/up9inc/mizu/tap"
)

var browserClientSocketUUIDs = make([]int, 0)
@@ -18,7 +19,7 @@ var socketListLock = sync.Mutex{}

type RoutesEventHandlers struct {
    EventHandlers
    SocketHarOutChannel chan<- *tap.OutputChannelItem
    SocketOutChannel chan<- *tapApi.OutputChannelItem
}

func init() {
@@ -73,7 +74,8 @@ func (h *RoutesEventHandlers) WebSocketMessage(_ int, message []byte) {
        if err != nil {
            rlog.Infof("Could not unmarshal message of message type %s %v\n", socketMessageBase.MessageType, err)
        } else {
            h.SocketHarOutChannel <- tappedEntryMessage.Data
            // NOTE: This is where the message comes back from the intermediate WebSocket to code.
            h.SocketOutChannel <- tappedEntryMessage.Data
        }
    case shared.WebSocketMessageTypeUpdateStatus:
        var statusMessage shared.WebSocketStatusMessage

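The WebSocket handler in the hunk above uses a two-phase decode: first unmarshal only the shared metadata to read the message type, then unmarshal the full payload into the matching concrete struct. A hedged, self-contained sketch of that dispatch pattern; the type names and the `"tappedEntry"` literal are illustrative, not the real `shared` package constants:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// WebSocketMessageMetadata mirrors the idea of the shared envelope: every
// message carries a type tag alongside its payload.
type WebSocketMessageMetadata struct {
	MessageType string `json:"messageType"`
}

// TappedEntryMessage is one concrete message shape; Data is simplified to a
// string here instead of a full OutputChannelItem.
type TappedEntryMessage struct {
	WebSocketMessageMetadata
	Data string `json:"data"`
}

// dispatch decodes the envelope first, then re-decodes the same bytes into
// the struct selected by MessageType.
func dispatch(raw []byte) (string, error) {
	var meta WebSocketMessageMetadata
	if err := json.Unmarshal(raw, &meta); err != nil {
		return "", err
	}
	switch meta.MessageType {
	case "tappedEntry":
		var msg TappedEntryMessage
		if err := json.Unmarshal(raw, &msg); err != nil {
			return "", err
		}
		return msg.Data, nil
	default:
		return "", fmt.Errorf("unknown message type: %s", meta.MessageType)
	}
}

func main() {
	data, err := dispatch([]byte(`{"messageType":"tappedEntry","data":"hello"}`))
	fmt.Println(data, err)
}
```

Decoding the raw bytes twice is slightly wasteful but keeps each message type's struct independent, which is why the handler can grow a new `case` per message type without touching the others.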
@@ -3,6 +3,7 @@ package controllers
import (
    "encoding/json"
    "fmt"
    "github.com/google/martian/har"
    "mizuserver/pkg/database"
    "mizuserver/pkg/models"
    "mizuserver/pkg/providers"
@@ -10,14 +11,20 @@ import (
    "mizuserver/pkg/utils"
    "mizuserver/pkg/validation"
    "net/http"
    "strings"
    "time"

    "github.com/gin-gonic/gin"
    "github.com/google/martian/har"
    "github.com/romana/rlog"

    tapApi "github.com/up9inc/mizu/tap/api"
)

var extensionsMap map[string]*tapApi.Extension // global

func InitExtensionsMap(ref map[string]*tapApi.Extension) {
    extensionsMap = ref
}

func GetEntries(c *gin.Context) {
    entriesFilter := &models.EntriesFilter{}

@@ -31,7 +38,7 @@ func GetEntries(c *gin.Context) {

    order := database.OperatorToOrderMapping[entriesFilter.Operator]
    operatorSymbol := database.OperatorToSymbolMapping[entriesFilter.Operator]
    var entries []models.MizuEntry
    var entries []tapApi.MizuEntry
    database.GetEntriesTable().
        Order(fmt.Sprintf("timestamp %s", order)).
        Where(fmt.Sprintf("timestamp %s %v", operatorSymbol, entriesFilter.Timestamp)).
@@ -40,13 +47,13 @@ func GetEntries(c *gin.Context) {
        Find(&entries)

    if len(entries) > 0 && order == database.OrderDesc {
        // the entries always order from oldest to newest so we should revers
        // the entries always order from oldest to newest - we should reverse
        utils.ReverseSlice(entries)
    }

    baseEntries := make([]models.BaseEntryDetails, 0)
    baseEntries := make([]tapApi.BaseEntryDetails, 0)
    for _, data := range entries {
        harEntry := models.BaseEntryDetails{}
        harEntry := tapApi.BaseEntryDetails{}
        if err := models.GetEntry(&data, &harEntry); err != nil {
            continue
        }
@@ -56,93 +63,6 @@ func GetEntries(c *gin.Context) {
    c.JSON(http.StatusOK, baseEntries)
}

func GetHARs(c *gin.Context) {
    entriesFilter := &models.HarFetchRequestQuery{}
    order := database.OrderDesc
    if err := c.BindQuery(entriesFilter); err != nil {
        c.JSON(http.StatusBadRequest, err)
    }
    err := validation.Validate(entriesFilter)
    if err != nil {
        c.JSON(http.StatusBadRequest, err)
    }

    var timestampFrom, timestampTo int64

    if entriesFilter.From < 0 {
        timestampFrom = 0
    } else {
        timestampFrom = entriesFilter.From
    }
    if entriesFilter.To <= 0 {
        timestampTo = time.Now().UnixNano() / int64(time.Millisecond)
    } else {
        timestampTo = entriesFilter.To
    }

    var entries []models.MizuEntry
    database.GetEntriesTable().
        Where(fmt.Sprintf("timestamp BETWEEN %v AND %v", timestampFrom, timestampTo)).
        Order(fmt.Sprintf("timestamp %s", order)).
        Find(&entries)

    if len(entries) > 0 {
        // the entries always order from oldest to newest so we should revers
        utils.ReverseSlice(entries)
    }

    harsObject := map[string]*models.ExtendedHAR{}

    for _, entryData := range entries {
        var harEntry har.Entry
        _ = json.Unmarshal([]byte(entryData.Entry), &harEntry)
        if entryData.ResolvedDestination != "" {
            harEntry.Request.URL = utils.SetHostname(harEntry.Request.URL, entryData.ResolvedDestination)
        }

        var fileName string
        sourceOfEntry := entryData.ResolvedSource
        if sourceOfEntry != "" {
            // naively assumes the proper service source is http
            sourceOfEntry = fmt.Sprintf("http://%s", sourceOfEntry)
            //replace / from the file name cause they end up creating a corrupted folder
            fileName = fmt.Sprintf("%s.har", strings.ReplaceAll(sourceOfEntry, "/", "_"))
        } else {
            fileName = "unknown_source.har"
        }
        if harOfSource, ok := harsObject[fileName]; ok {
            harOfSource.Log.Entries = append(harOfSource.Log.Entries, &harEntry)
        } else {
            var entriesHar []*har.Entry
            entriesHar = append(entriesHar, &harEntry)
            harsObject[fileName] = &models.ExtendedHAR{
                Log: &models.ExtendedLog{
                    Version: "1.2",
                    Creator: &models.ExtendedCreator{
                        Creator: &har.Creator{
                            Name: "mizu",
                            Version: "0.0.2",
                        },
                    },
                    Entries: entriesHar,
                },
            }
            // leave undefined when no source is present, otherwise modeler assumes source is empty string ""
            if sourceOfEntry != "" {
                harsObject[fileName].Log.Creator.Source = &sourceOfEntry
            }
        }
    }

    retObj := map[string][]byte{}
    for k, v := range harsObject {
        bytesData, _ := json.Marshal(v)
        retObj[k] = bytesData
    }
    buffer := utils.ZipData(retObj)
    c.Data(http.StatusOK, "application/octet-stream", buffer.Bytes())
}

func UploadEntries(c *gin.Context) {
    rlog.Infof("Upload entries - started\n")

@@ -194,45 +114,44 @@ func GetFullEntries(c *gin.Context) {
        timestampTo = entriesFilter.To
    }

    entriesArray := database.GetEntriesFromDb(timestampFrom, timestampTo)
    result := make([]models.FullEntryDetails, 0)
    entriesArray := database.GetEntriesFromDb(timestampFrom, timestampTo, nil)

    result := make([]har.Entry, 0)
    for _, data := range entriesArray {
        harEntry := models.FullEntryDetails{}
        if err := models.GetEntry(&data, &harEntry); err != nil {
        var pair tapApi.RequestResponsePair
        if err := json.Unmarshal([]byte(data.Entry), &pair); err != nil {
            continue
        }
        result = append(result, harEntry)
        harEntry, err := utils.NewEntry(&pair)
        if err != nil {
            continue
        }
        result = append(result, *harEntry)
    }

    c.JSON(http.StatusOK, result)
}

func GetEntry(c *gin.Context) {
    var entryData models.MizuEntry
    var entryData tapApi.MizuEntry
    database.GetEntriesTable().
        Where(map[string]string{"entryId": c.Param("entryId")}).
        First(&entryData)

    fullEntry := models.FullEntryDetails{}
    if err := models.GetEntry(&entryData, &fullEntry); err != nil {
        c.JSON(http.StatusInternalServerError, map[string]interface{}{
            "error": true,
            "msg": "Can't get entry details",
        })
    }
    fullEntryWithPolicy := models.FullEntryWithPolicy{}
    if err := models.GetEntry(&entryData, &fullEntryWithPolicy); err != nil {
        c.JSON(http.StatusInternalServerError, map[string]interface{}{
            "error": true,
            "msg": "Can't get entry details",
        })
    }
    c.JSON(http.StatusOK, fullEntryWithPolicy)
    extension := extensionsMap[entryData.ProtocolName]
    protocol, representation, bodySize, _ := extension.Dissector.Represent(&entryData)
    c.JSON(http.StatusOK, tapApi.MizuEntryWrapper{
        Protocol: protocol,
        Representation: string(representation),
        BodySize: bodySize,
        Data: entryData,
    })
}

func DeleteAllEntries(c *gin.Context) {
    database.GetEntriesTable().
        Where("1 = 1").
        Delete(&models.MizuEntry{})
        Delete(&tapApi.MizuEntry{})

    c.JSON(http.StatusOK, map[string]string{
        "msg": "Success",

@@ -2,16 +2,18 @@ package database

import (
    "fmt"
    "mizuserver/pkg/utils"
    "time"

    "gorm.io/driver/sqlite"
    "gorm.io/gorm"
    "gorm.io/gorm/logger"
    "mizuserver/pkg/models"
    "mizuserver/pkg/utils"
    "time"

    tapApi "github.com/up9inc/mizu/tap/api"
)

const (
    DBPath = "./entries.db"
    DBPath = "./entries.db"
    OrderDesc = "desc"
    OrderAsc = "asc"
    LT = "lt"
@@ -19,8 +21,8 @@ const (
)

var (
    DB *gorm.DB
    IsDBLocked = false
    DB *gorm.DB
    IsDBLocked = false
    OperatorToSymbolMapping = map[string]string{
        LT: "<",
        GT: ">",
@@ -40,7 +42,7 @@ func GetEntriesTable() *gorm.DB {
    return DB.Table("mizu_entries")
}

func CreateEntry(entry *models.MizuEntry) {
func CreateEntry(entry *tapApi.MizuEntry) {
    if IsDBLocked {
        return
    }
@@ -51,15 +53,20 @@ func initDataBase(databasePath string) *gorm.DB {
    temp, _ := gorm.Open(sqlite.Open(databasePath), &gorm.Config{
        Logger: &utils.TruncatingLogger{LogLevel: logger.Warn, SlowThreshold: 500 * time.Millisecond},
    })
    _ = temp.AutoMigrate(&models.MizuEntry{}) // this will ensure table is created
    _ = temp.AutoMigrate(&tapApi.MizuEntry{}) // this will ensure table is created
    return temp
}


func GetEntriesFromDb(timestampFrom int64, timestampTo int64) []models.MizuEntry {
func GetEntriesFromDb(timestampFrom int64, timestampTo int64, protocolName *string) []tapApi.MizuEntry {
    order := OrderDesc
    var entries []models.MizuEntry
    protocolNameCondition := "1 = 1"
    if protocolName != nil {
        protocolNameCondition = fmt.Sprintf("protocolKey = '%s'", *protocolName)
    }

    var entries []tapApi.MizuEntry
    GetEntriesTable().
        Where(protocolNameCondition).
        Where(fmt.Sprintf("timestamp BETWEEN %v AND %v", timestampFrom, timestampTo)).
        Order(fmt.Sprintf("timestamp %s", order)).
        Find(&entries)
@@ -70,4 +77,3 @@ func GetEntriesFromDb(timestampFrom int64, timestampTo int64) []models.MizuEntry
    }
    return entries
}

@@ -1,16 +1,17 @@
package database

import (
    "log"
    "os"
    "strconv"
    "time"

    "github.com/fsnotify/fsnotify"
    "github.com/romana/rlog"
    "github.com/up9inc/mizu/shared"
    "github.com/up9inc/mizu/shared/debounce"
    "github.com/up9inc/mizu/shared/units"
    "log"
    "mizuserver/pkg/models"
    "os"
    "strconv"
    "time"
    tapApi "github.com/up9inc/mizu/tap/api"
)

const percentageOfMaxSizeBytesToPrune = 15
@@ -99,7 +100,7 @@ func pruneOldEntries(currentFileSize int64) {
        if bytesToBeRemoved >= amountOfBytesToTrim {
            break
        }
        var entry models.MizuEntry
        var entry tapApi.MizuEntry
        err = DB.ScanRows(rows, &entry)
        if err != nil {
            rlog.Errorf("Error scanning db row: %v", err)
@@ -111,7 +112,7 @@ func pruneOldEntries(currentFileSize int64) {
    }

    if len(entryIdsToRemove) > 0 {
        GetEntriesTable().Where(entryIdsToRemove).Delete(models.MizuEntry{})
        GetEntriesTable().Where(entryIdsToRemove).Delete(tapApi.MizuEntry{})
        // VACUUM causes sqlite to shrink the db file after rows have been deleted, the db file will not shrink without this
        DB.Exec("VACUUM")
        rlog.Errorf("Removed %d rows and cleared %s", len(entryIdsToRemove), units.BytesToHumanReadable(bytesToBeRemoved))

@@ -2,123 +2,27 @@ package models
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
|
||||
tapApi "github.com/up9inc/mizu/tap/api"
|
||||
"mizuserver/pkg/rules"
|
||||
"mizuserver/pkg/utils"
|
||||
"time"
|
||||
|
||||
"github.com/google/martian/har"
|
||||
"github.com/up9inc/mizu/shared"
|
||||
"github.com/up9inc/mizu/tap"
|
||||
)
|
||||
|
||||
type DataUnmarshaler interface {
|
||||
UnmarshalData(*MizuEntry) error
|
||||
}
|
||||
|
||||
func GetEntry(r *MizuEntry, v DataUnmarshaler) error {
|
||||
func GetEntry(r *tapApi.MizuEntry, v tapApi.DataUnmarshaler) error {
|
||||
return v.UnmarshalData(r)
|
||||
}
|
||||
|
||||
type MizuEntry struct {
|
||||
ID uint `gorm:"primarykey"`
|
||||
CreatedAt time.Time
|
||||
UpdatedAt time.Time
|
||||
Entry string `json:"entry,omitempty" gorm:"column:entry"`
|
||||
EntryId string `json:"entryId" gorm:"column:entryId"`
|
||||
Url string `json:"url" gorm:"column:url"`
|
||||
Method string `json:"method" gorm:"column:method"`
|
||||
Status int `json:"status" gorm:"column:status"`
|
||||
RequestSenderIp string `json:"requestSenderIp" gorm:"column:requestSenderIp"`
|
||||
Service string `json:"service" gorm:"column:service"`
|
||||
Timestamp int64 `json:"timestamp" gorm:"column:timestamp"`
|
||||
Path string `json:"path" gorm:"column:path"`
|
||||
ResolvedSource string `json:"resolvedSource,omitempty" gorm:"column:resolvedSource"`
|
||||
ResolvedDestination string `json:"resolvedDestination,omitempty" gorm:"column:resolvedDestination"`
|
||||
IsOutgoing bool `json:"isOutgoing,omitempty" gorm:"column:isOutgoing"`
|
||||
EstimatedSizeBytes int `json:"-" gorm:"column:estimatedSizeBytes"`
|
||||
}
|
||||
|
||||
type BaseEntryDetails struct {
|
||||
Id string `json:"id,omitempty"`
|
||||
Url string `json:"url,omitempty"`
|
||||
RequestSenderIp string `json:"requestSenderIp,omitempty"`
|
||||
Service string `json:"service,omitempty"`
|
||||
Path string `json:"path,omitempty"`
|
||||
StatusCode int `json:"statusCode,omitempty"`
|
||||
Method string `json:"method,omitempty"`
|
||||
Timestamp int64 `json:"timestamp,omitempty"`
|
||||
IsOutgoing bool `json:"isOutgoing,omitempty"`
|
||||
Latency int64 `json:"latency,omitempty"`
|
||||
Rules ApplicableRules `json:"rules,omitempty"`
|
||||
}
|
||||
|
||||
type ApplicableRules struct {
|
||||
Latency int64 `json:"latency,omitempty"`
|
||||
Status bool `json:"status,omitempty"`
|
||||
NumberOfRules int `json:"numberOfRules,omitempty"`
|
||||
}
|
||||
|
||||
func NewApplicableRules(status bool, latency int64, number int) ApplicableRules {
|
||||
ar := ApplicableRules{}
|
||||
ar.Status = status
|
||||
ar.Latency = latency
|
||||
ar.NumberOfRules = number
|
||||
return ar
|
||||
}
|
||||
|
||||
type FullEntryDetails struct {
|
||||
har.Entry
|
||||
}
|
||||
|
||||
type FullEntryDetailsExtra struct {
|
||||
har.Entry
|
||||
}
|
||||
|
||||
func (bed *BaseEntryDetails) UnmarshalData(entry *MizuEntry) error {
|
||||
entryUrl := entry.Url
|
||||
service := entry.Service
|
||||
if entry.ResolvedDestination != "" {
|
||||
entryUrl = utils.SetHostname(entryUrl, entry.ResolvedDestination)
|
||||
service = utils.SetHostname(service, entry.ResolvedDestination)
|
||||
}
|
||||
bed.Id = entry.EntryId
|
||||
bed.Url = entryUrl
|
||||
bed.Service = service
|
||||
bed.Path = entry.Path
|
||||
bed.StatusCode = entry.Status
|
||||
bed.Method = entry.Method
|
||||
bed.Timestamp = entry.Timestamp
|
||||
bed.RequestSenderIp = entry.RequestSenderIp
|
||||
bed.IsOutgoing = entry.IsOutgoing
|
||||
return nil
|
||||
}
|
||||
|
||||
func (fed *FullEntryDetails) UnmarshalData(entry *MizuEntry) error {
|
||||
if err := json.Unmarshal([]byte(entry.Entry), &fed.Entry); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if entry.ResolvedDestination != "" {
|
||||
fed.Entry.Request.URL = utils.SetHostname(fed.Entry.Request.URL, entry.ResolvedDestination)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (fedex *FullEntryDetailsExtra) UnmarshalData(entry *MizuEntry) error {
|
||||
if err := json.Unmarshal([]byte(entry.Entry), &fedex.Entry); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if entry.ResolvedSource != "" {
|
||||
fedex.Entry.Request.Headers = append(fedex.Request.Headers, har.Header{Name: "x-mizu-source", Value: entry.ResolvedSource})
|
||||
}
|
||||
if entry.ResolvedDestination != "" {
|
||||
		fedex.Entry.Request.Headers = append(fedex.Request.Headers, har.Header{Name: "x-mizu-destination", Value: entry.ResolvedDestination})
		fedex.Entry.Request.URL = utils.SetHostname(fedex.Entry.Request.URL, entry.ResolvedDestination)
	}
	return nil
}

// TODO: until we fixed the Rules feature
//func NewApplicableRules(status bool, latency int64, number int) tapApi.ApplicableRules {
//	ar := tapApi.ApplicableRules{}
//	ar.Status = status
//	ar.Latency = latency
//	ar.NumberOfRules = number
//	return ar
//}

type EntriesFilter struct {
	Limit int `form:"limit" validate:"required,min=1,max=200"`
@@ -138,12 +42,12 @@ type HarFetchRequestQuery struct {

type WebSocketEntryMessage struct {
	*shared.WebSocketMessageMetadata
	Data *BaseEntryDetails `json:"data,omitempty"`
	Data *tapApi.BaseEntryDetails `json:"data,omitempty"`
}

type WebSocketTappedEntryMessage struct {
	*shared.WebSocketMessageMetadata
	Data *tap.OutputChannelItem
	Data *tapApi.OutputChannelItem
}

type WebsocketOutboundLinkMessage struct {
@@ -151,7 +55,7 @@ type WebsocketOutboundLinkMessage struct {
	Data *tap.OutboundLink
}

func CreateBaseEntryWebSocketMessage(base *BaseEntryDetails) ([]byte, error) {
func CreateBaseEntryWebSocketMessage(base *tapApi.BaseEntryDetails) ([]byte, error) {
	message := &WebSocketEntryMessage{
		WebSocketMessageMetadata: &shared.WebSocketMessageMetadata{
			MessageType: shared.WebSocketMessageTypeEntry,
@@ -161,7 +65,7 @@ func CreateBaseEntryWebSocketMessage(base *BaseEntryDetails) ([]byte, error) {
	return json.Marshal(message)
}

func CreateWebsocketTappedEntryMessage(base *tap.OutputChannelItem) ([]byte, error) {
func CreateWebsocketTappedEntryMessage(base *tapApi.OutputChannelItem) ([]byte, error) {
	message := &WebSocketTappedEntryMessage{
		WebSocketMessageMetadata: &shared.WebSocketMessageMetadata{
			MessageType: shared.WebSocketMessageTypeTappedEntry,
@@ -207,10 +111,16 @@ type FullEntryWithPolicy struct {
	Service string `json:"service"`
}

func (fewp *FullEntryWithPolicy) UnmarshalData(entry *MizuEntry) error {
	if err := json.Unmarshal([]byte(entry.Entry), &fewp.Entry); err != nil {
func (fewp *FullEntryWithPolicy) UnmarshalData(entry *tapApi.MizuEntry) error {
	var pair tapApi.RequestResponsePair
	if err := json.Unmarshal([]byte(entry.Entry), &pair); err != nil {
		return err
	}
	harEntry, err := utils.NewEntry(&pair)
	if err != nil {
		return err
	}
	fewp.Entry = *harEntry

	_, resultPolicyToSend := rules.MatchRequestPolicy(fewp.Entry, entry.Service)
	fewp.RulesMatched = resultPolicyToSend
@@ -218,9 +128,10 @@ func (fewp *FullEntryWithPolicy) UnmarshalData(entry *MizuEntry) error {
	return nil
}

func RunValidationRulesState(harEntry har.Entry, service string) ApplicableRules {
	numberOfRules, resultPolicyToSend := rules.MatchRequestPolicy(harEntry, service)
	statusPolicyToSend, latency, numberOfRules := rules.PassedValidationRules(resultPolicyToSend, numberOfRules)
	ar := NewApplicableRules(statusPolicyToSend, latency, numberOfRules)
	return ar
}
// TODO: until we fixed the Rules feature
//func RunValidationRulesState(harEntry har.Entry, service string) tapApi.ApplicableRules {
//	numberOfRules, resultPolicyToSend := rules.MatchRequestPolicy(harEntry, service)
//	statusPolicyToSend, latency, numberOfRules := rules.PassedValidationRules(resultPolicyToSend, numberOfRules)
//	ar := NewApplicableRules(statusPolicyToSend, latency, numberOfRules)
//	return ar
//}
@@ -4,10 +4,11 @@ import (
	"context"
	"errors"
	"fmt"

	"github.com/romana/rlog"
	k8serrors "k8s.io/apimachinery/pkg/api/errors"

	"github.com/orcaman/concurrent-map"
	cmap "github.com/orcaman/concurrent-map"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"

@@ -15,8 +15,6 @@ func EntriesRoutes(ginApp *gin.Engine) {
	routeGroup.GET("/uploadEntries", controllers.UploadEntries)
	routeGroup.GET("/resolving", controllers.GetCurrentResolvingInformation)

	routeGroup.GET("/har", controllers.GetHARs)

	routeGroup.GET("/resetDB", controllers.DeleteAllEntries)         // get single (full) entry
	routeGroup.GET("/generalStats", controllers.GetGeneralStats)     // get general stats about entries in DB
@@ -1,200 +0,0 @@
package sensitiveDataFiltering

import (
	"encoding/json"
	"encoding/xml"
	"errors"
	"fmt"
	"github.com/up9inc/mizu/tap"
	"net/url"
	"strings"

	"github.com/beevik/etree"
	"github.com/google/martian/har"
	"github.com/up9inc/mizu/shared"
)

func FilterSensitiveInfoFromHarRequest(harOutputItem *tap.OutputChannelItem, options *shared.TrafficFilteringOptions) {
	harOutputItem.HarEntry.Request.Headers = filterHarHeaders(harOutputItem.HarEntry.Request.Headers)
	harOutputItem.HarEntry.Response.Headers = filterHarHeaders(harOutputItem.HarEntry.Response.Headers)

	harOutputItem.HarEntry.Request.Cookies = make([]har.Cookie, 0, 0)
	harOutputItem.HarEntry.Response.Cookies = make([]har.Cookie, 0, 0)

	harOutputItem.HarEntry.Request.URL = filterUrl(harOutputItem.HarEntry.Request.URL)
	for i, queryString := range harOutputItem.HarEntry.Request.QueryString {
		if isFieldNameSensitive(queryString.Name) {
			harOutputItem.HarEntry.Request.QueryString[i].Value = maskedFieldPlaceholderValue
		}
	}

	if harOutputItem.HarEntry.Request.PostData != nil {
		requestContentType := getContentTypeHeaderValue(harOutputItem.HarEntry.Request.Headers)
		filteredRequestBody, err := filterHttpBody([]byte(harOutputItem.HarEntry.Request.PostData.Text), requestContentType, options)
		if err == nil {
			harOutputItem.HarEntry.Request.PostData.Text = string(filteredRequestBody)
		}
	}
	if harOutputItem.HarEntry.Response.Content != nil {
		responseContentType := getContentTypeHeaderValue(harOutputItem.HarEntry.Response.Headers)
		filteredResponseBody, err := filterHttpBody(harOutputItem.HarEntry.Response.Content.Text, responseContentType, options)
		if err == nil {
			harOutputItem.HarEntry.Response.Content.Text = filteredResponseBody
		}
	}
}

func filterHarHeaders(headers []har.Header) []har.Header {
	newHeaders := make([]har.Header, 0)
	for i, header := range headers {
		if strings.ToLower(header.Name) == "cookie" {
			continue
		} else if isFieldNameSensitive(header.Name) {
			newHeaders = append(newHeaders, har.Header{Name: header.Name, Value: maskedFieldPlaceholderValue})
			headers[i].Value = maskedFieldPlaceholderValue
		} else {
			newHeaders = append(newHeaders, header)
		}
	}
	return newHeaders
}

func getContentTypeHeaderValue(headers []har.Header) string {
	for _, header := range headers {
		if strings.ToLower(header.Name) == "content-type" {
			return header.Value
		}
	}
	return ""
}

func isFieldNameSensitive(fieldName string) bool {
	name := strings.ToLower(fieldName)
	name = strings.ReplaceAll(name, "_", "")
	name = strings.ReplaceAll(name, "-", "")
	name = strings.ReplaceAll(name, " ", "")

	for _, sensitiveField := range personallyIdentifiableDataFields {
		if strings.Contains(name, sensitiveField) {
			return true
		}
	}

	return false
}

func filterHttpBody(bytes []byte, contentType string, options *shared.TrafficFilteringOptions) ([]byte, error) {
	mimeType := strings.Split(contentType, ";")[0]
	switch strings.ToLower(mimeType) {
	case "application/json":
		return filterJsonBody(bytes)
	case "text/html":
		fallthrough
	case "application/xhtml+xml":
		fallthrough
	case "text/xml":
		fallthrough
	case "application/xml":
		return filterXmlEtree(bytes)
	case "text/plain":
		if options != nil && options.PlainTextMaskingRegexes != nil {
			return filterPlainText(bytes, options), nil
		}
	}
	return bytes, nil
}

func filterPlainText(bytes []byte, options *shared.TrafficFilteringOptions) []byte {
	for _, regex := range options.PlainTextMaskingRegexes {
		bytes = regex.ReplaceAll(bytes, []byte(maskedFieldPlaceholderValue))
	}
	return bytes
}

func filterXmlEtree(bytes []byte) ([]byte, error) {
	if !IsValidXML(bytes) {
		return nil, errors.New("Invalid XML")
	}
	xmlDoc := etree.NewDocument()
	err := xmlDoc.ReadFromBytes(bytes)
	if err != nil {
		return nil, err
	} else {
		filterXmlElement(xmlDoc.Root())
	}
	return xmlDoc.WriteToBytes()
}

func IsValidXML(data []byte) bool {
	return xml.Unmarshal(data, new(interface{})) == nil
}

func filterXmlElement(element *etree.Element) {
	for i, attribute := range element.Attr {
		if isFieldNameSensitive(attribute.Key) {
			element.Attr[i].Value = maskedFieldPlaceholderValue
		}
	}
	if element.ChildElements() == nil || len(element.ChildElements()) == 0 {
		if isFieldNameSensitive(element.Tag) {
			element.SetText(maskedFieldPlaceholderValue)
		}
	} else {
		for _, element := range element.ChildElements() {
			filterXmlElement(element)
		}
	}
}

func filterJsonBody(bytes []byte) ([]byte, error) {
	var bodyJsonMap map[string]interface{}
	err := json.Unmarshal(bytes, &bodyJsonMap)
	if err != nil {
		return nil, err
	}
	filterJsonMap(bodyJsonMap)
	return json.Marshal(bodyJsonMap)
}

func filterJsonMap(jsonMap map[string]interface{}) {
	for key, value := range jsonMap {
		// Do not replace nil values with maskedFieldPlaceholderValue
		if value == nil {
			continue
		}

		nestedMap, isNested := value.(map[string]interface{})
		if isNested {
			filterJsonMap(nestedMap)
		} else {
			if isFieldNameSensitive(key) {
				jsonMap[key] = maskedFieldPlaceholderValue
			}
		}
	}
}

// receives string representing url, returns string url without sensitive query param values (http://service/api?userId=bob&password=123&type=login -> http://service/api?userId=[REDACTED]&password=[REDACTED]&type=login)
func filterUrl(originalUrl string) string {
	parsedUrl, err := url.Parse(originalUrl)
	if err != nil {
		return fmt.Sprintf("http://%s", maskedFieldPlaceholderValue)
	} else {
		if len(parsedUrl.RawQuery) > 0 {
			newQueryArgs := make([]string, 0)
			for urlQueryParamName, urlQueryParamValues := range parsedUrl.Query() {
				newValues := urlQueryParamValues
				if isFieldNameSensitive(urlQueryParamName) {
					newValues = []string{maskedFieldPlaceholderValue}
				}
				for _, paramValue := range newValues {
					newQueryArgs = append(newQueryArgs, fmt.Sprintf("%s=%s", urlQueryParamName, paramValue))
				}
			}

			parsedUrl.RawQuery = strings.Join(newQueryArgs, "&")
		}

		return parsedUrl.String()
	}
}
@@ -5,12 +5,14 @@ import (
	"compress/zlib"
	"encoding/json"
	"fmt"
	"github.com/google/martian/har"
	"github.com/romana/rlog"
	"github.com/up9inc/mizu/shared"
	tapApi "github.com/up9inc/mizu/tap/api"
	"io/ioutil"
	"log"
	"mizuserver/pkg/database"
	"mizuserver/pkg/models"
	"mizuserver/pkg/utils"
	"net/http"
	"net/url"
	"strings"
@@ -129,21 +131,33 @@ func UploadEntriesImpl(token string, model string, envPrefix string, sleepInterv
	for {
		timestampTo := time.Now().UnixNano() / int64(time.Millisecond)
		rlog.Infof("Getting entries from %v, to %v\n", timestampFrom, timestampTo)
		entriesArray := database.GetEntriesFromDb(timestampFrom, timestampTo)
		protocolFilter := "http"
		entriesArray := database.GetEntriesFromDb(timestampFrom, timestampTo, &protocolFilter)

		if len(entriesArray) > 0 {
			fullEntriesExtra := make([]models.FullEntryDetailsExtra, 0)
			result := make([]har.Entry, 0)
			for _, data := range entriesArray {
				harEntry := models.FullEntryDetailsExtra{}
				if err := models.GetEntry(&data, &harEntry); err != nil {
				var pair tapApi.RequestResponsePair
				if err := json.Unmarshal([]byte(data.Entry), &pair); err != nil {
					continue
				}
				fullEntriesExtra = append(fullEntriesExtra, harEntry)
				harEntry, err := utils.NewEntry(&pair)
				if err != nil {
					continue
				}
				if data.ResolvedSource != "" {
					harEntry.Request.Headers = append(harEntry.Request.Headers, har.Header{Name: "x-mizu-source", Value: data.ResolvedSource})
				}
				if data.ResolvedDestination != "" {
					harEntry.Request.Headers = append(harEntry.Request.Headers, har.Header{Name: "x-mizu-destination", Value: data.ResolvedDestination})
					harEntry.Request.URL = utils.SetHostname(harEntry.Request.URL, data.ResolvedDestination)
				}
				result = append(result, *harEntry)
			}
			rlog.Infof("About to upload %v entries\n", len(fullEntriesExtra))

			body, jMarshalErr := json.Marshal(fullEntriesExtra)
			rlog.Infof("About to upload %v entries\n", len(result))

			body, jMarshalErr := json.Marshal(result)
			if jMarshalErr != nil {
				analyzeInformation.Reset()
				rlog.Infof("Stopping analyzing")
agent/pkg/utils/har.go (new file, 259 lines)
@@ -0,0 +1,259 @@
package utils

import (
	"bytes"
	"errors"
	"fmt"
	"strconv"
	"strings"
	"time"

	"github.com/google/martian/har"
	"github.com/up9inc/mizu/tap"
	"github.com/up9inc/mizu/tap/api"
)

// Keep it because we might want cookies in the future
//func BuildCookies(rawCookies []interface{}) []har.Cookie {
//	cookies := make([]har.Cookie, 0, len(rawCookies))
//
//	for _, cookie := range rawCookies {
//		c := cookie.(map[string]interface{})
//		expiresStr := ""
//		if c["expires"] != nil {
//			expiresStr = c["expires"].(string)
//		}
//		expires, _ := time.Parse(time.RFC3339, expiresStr)
//		httpOnly := false
//		if c["httponly"] != nil {
//			httpOnly, _ = strconv.ParseBool(c["httponly"].(string))
//		}
//		secure := false
//		if c["secure"] != nil {
//			secure, _ = strconv.ParseBool(c["secure"].(string))
//		}
//		path := ""
//		if c["path"] != nil {
//			path = c["path"].(string)
//		}
//		domain := ""
//		if c["domain"] != nil {
//			domain = c["domain"].(string)
//		}
//
//		cookies = append(cookies, har.Cookie{
//			Name:        c["name"].(string),
//			Value:       c["value"].(string),
//			Path:        path,
//			Domain:      domain,
//			HTTPOnly:    httpOnly,
//			Secure:      secure,
//			Expires:     expires,
//			Expires8601: expiresStr,
//		})
//	}
//
//	return cookies
//}

func BuildHeaders(rawHeaders []interface{}) ([]har.Header, string, string, string, string, string) {
	var host, scheme, authority, path, status string
	headers := make([]har.Header, 0, len(rawHeaders))

	for _, header := range rawHeaders {
		h := header.(map[string]interface{})

		headers = append(headers, har.Header{
			Name:  h["name"].(string),
			Value: h["value"].(string),
		})

		if h["name"] == "Host" {
			host = h["value"].(string)
		}
		if h["name"] == ":authority" {
			authority = h["value"].(string)
		}
		if h["name"] == ":scheme" {
			scheme = h["value"].(string)
		}
		if h["name"] == ":path" {
			path = h["value"].(string)
		}
		if h["name"] == ":status" {
			path = h["value"].(string)
		}
	}

	return headers, host, scheme, authority, path, status
}

func BuildPostParams(rawParams []interface{}) []har.Param {
	params := make([]har.Param, 0, len(rawParams))
	for _, param := range rawParams {
		p := param.(map[string]interface{})
		name := ""
		if p["name"] != nil {
			name = p["name"].(string)
		}
		value := ""
		if p["value"] != nil {
			value = p["value"].(string)
		}
		fileName := ""
		if p["fileName"] != nil {
			fileName = p["fileName"].(string)
		}
		contentType := ""
		if p["contentType"] != nil {
			contentType = p["contentType"].(string)
		}

		params = append(params, har.Param{
			Name:        name,
			Value:       value,
			Filename:    fileName,
			ContentType: contentType,
		})
	}

	return params
}

func NewRequest(request *api.GenericMessage) (harRequest *har.Request, err error) {
	reqDetails := request.Payload.(map[string]interface{})["details"].(map[string]interface{})

	headers, host, scheme, authority, path, _ := BuildHeaders(reqDetails["headers"].([]interface{}))
	cookies := make([]har.Cookie, 0) // BuildCookies(reqDetails["cookies"].([]interface{}))

	postData, _ := reqDetails["postData"].(map[string]interface{})
	mimeType, _ := postData["mimeType"]
	if mimeType == nil || len(mimeType.(string)) == 0 {
		mimeType = "text/html"
	}
	text, _ := postData["text"]
	postDataText := ""
	if text != nil {
		postDataText = text.(string)
	}

	queryString := make([]har.QueryString, 0)
	for _, _qs := range reqDetails["queryString"].([]interface{}) {
		qs := _qs.(map[string]interface{})
		queryString = append(queryString, har.QueryString{
			Name:  qs["name"].(string),
			Value: qs["value"].(string),
		})
	}

	url := fmt.Sprintf("http://%s%s", host, reqDetails["url"].(string))
	if strings.HasPrefix(mimeType.(string), "application/grpc") {
		url = fmt.Sprintf("%s://%s%s", scheme, authority, path)
	}

	harParams := make([]har.Param, 0)
	if postData["params"] != nil {
		harParams = BuildPostParams(postData["params"].([]interface{}))
	}

	harRequest = &har.Request{
		Method:      reqDetails["method"].(string),
		URL:         url,
		HTTPVersion: reqDetails["httpVersion"].(string),
		HeadersSize: -1,
		BodySize:    int64(bytes.NewBufferString(postDataText).Len()),
		QueryString: queryString,
		Headers:     headers,
		Cookies:     cookies,
		PostData: &har.PostData{
			MimeType: mimeType.(string),
			Params:   harParams,
			Text:     postDataText,
		},
	}

	return
}

func NewResponse(response *api.GenericMessage) (harResponse *har.Response, err error) {
	resDetails := response.Payload.(map[string]interface{})["details"].(map[string]interface{})

	headers, _, _, _, _, _status := BuildHeaders(resDetails["headers"].([]interface{}))
	cookies := make([]har.Cookie, 0) // BuildCookies(resDetails["cookies"].([]interface{}))

	content, _ := resDetails["content"].(map[string]interface{})
	mimeType, _ := content["mimeType"]
	if mimeType == nil || len(mimeType.(string)) == 0 {
		mimeType = "text/html"
	}
	encoding, _ := content["encoding"]
	text, _ := content["text"]
	bodyText := ""
	if text != nil {
		bodyText = text.(string)
	}

	harContent := &har.Content{
		Encoding: encoding.(string),
		MimeType: mimeType.(string),
		Text:     []byte(bodyText),
		Size:     int64(len(bodyText)),
	}

	status := int(resDetails["status"].(float64))
	if strings.HasPrefix(mimeType.(string), "application/grpc") {
		status, err = strconv.Atoi(_status)
		if err != nil {
			tap.SilentError("convert-response-status-for-har", "Failed converting status to int %s (%v,%+v)", err, err, err)
			return nil, errors.New("failed converting response status to int for HAR")
		}
	}

	harResponse = &har.Response{
		HTTPVersion: resDetails["httpVersion"].(string),
		Status:      status,
		StatusText:  resDetails["statusText"].(string),
		HeadersSize: -1,
		BodySize:    int64(bytes.NewBufferString(bodyText).Len()),
		Headers:     headers,
		Cookies:     cookies,
		Content:     harContent,
	}
	return
}

func NewEntry(pair *api.RequestResponsePair) (*har.Entry, error) {
	harRequest, err := NewRequest(&pair.Request)
	if err != nil {
		tap.SilentError("convert-request-to-har", "Failed converting request to HAR %s (%v,%+v)", err, err, err)
		return nil, errors.New("failed converting request to HAR")
	}

	harResponse, err := NewResponse(&pair.Response)
	if err != nil {
		fmt.Printf("err: %+v\n", err)
		tap.SilentError("convert-response-to-har", "Failed converting response to HAR %s (%v,%+v)", err, err, err)
		return nil, errors.New("failed converting response to HAR")
	}

	totalTime := pair.Response.CaptureTime.Sub(pair.Request.CaptureTime).Round(time.Millisecond).Milliseconds()
	if totalTime < 1 {
		totalTime = 1
	}

	harEntry := har.Entry{
		StartedDateTime: pair.Request.CaptureTime,
		Time:            totalTime,
		Request:         harRequest,
		Response:        harResponse,
		Cache:           &har.Cache{},
		Timings: &har.Timings{
			Send:    -1,
			Wait:    -1,
			Receive: totalTime,
		},
	}

	return &harEntry, nil
}
build_extensions.sh (new executable file, 12 lines)
@@ -0,0 +1,12 @@
#!/bin/bash

for f in tap/extensions/*; do
    if [ -d "$f" ]; then
        extension=$(basename $f) && \
        cd tap/extensions/${extension} && \
        go build -buildmode=plugin -o ../${extension}.so . && \
        cd ../../.. && \
        mkdir -p agent/build/extensions && \
        cp tap/extensions/${extension}.so agent/build/extensions
    fi
done
@@ -1,10 +1,10 @@
package apiserver

import (
	"archive/zip"
	"bytes"
	"encoding/json"
	"fmt"
	"github.com/up9inc/mizu/cli/config"
	"github.com/up9inc/mizu/cli/logger"
	"github.com/up9inc/mizu/cli/uiUtils"
	"github.com/up9inc/mizu/shared"
@@ -18,18 +18,27 @@ import (
type apiServerProvider struct {
	url     string
	isReady bool
	retries int
}

var Provider = apiServerProvider{}
var Provider = apiServerProvider{retries: config.GetIntEnvConfig(config.ApiServerRetries, 20)}

func (provider *apiServerProvider) InitAndTestConnection(url string, retries int) error {
func (provider *apiServerProvider) InitAndTestConnection(url string) error {
	healthUrl := fmt.Sprintf("%s/", url)
	retriesLeft := retries
	retriesLeft := provider.retries
	for retriesLeft > 0 {
		if response, err := http.Get(healthUrl); err != nil {
			logger.Log.Debugf("[ERROR] failed connecting to api server %v", err)
		} else if response.StatusCode != 200 {
			logger.Log.Debugf("can't connect to api server yet, response status code %v", response.StatusCode)
			responseBody := ""
			data, readErr := ioutil.ReadAll(response.Body)
			if readErr == nil {
				responseBody = string(data)
			}

			logger.Log.Debugf("can't connect to api server yet, response status code: %v, body: %v", response.StatusCode, responseBody)

			response.Body.Close()
		} else {
			logger.Log.Debugf("connection test to api server passed successfully")
			break
@@ -40,7 +49,7 @@ func (provider *apiServerProvider) InitAndTestConnection(url string, retries int
	if retriesLeft == 0 {
		provider.isReady = false
		return fmt.Errorf("couldn't reach the api server after %v retries", retries)
		return fmt.Errorf("couldn't reach the api server after %v retries", provider.retries)
	}
	provider.url = url
	provider.isReady = true
@@ -121,28 +130,6 @@ func (provider *apiServerProvider) GetGeneralStats() (map[string]interface{}, er
	return generalStats, nil
}

func (provider *apiServerProvider) GetHars(fromTimestamp int, toTimestamp int) (*zip.Reader, error) {
	if !provider.isReady {
		return nil, fmt.Errorf("trying to reach api server when not initialized yet")
	}
	resp, err := http.Get(fmt.Sprintf("%s/api/har?from=%v&to=%v", provider.url, fromTimestamp, toTimestamp))
	if err != nil {
		return nil, fmt.Errorf("failed getting har from api server %w", err)
	}

	defer func() { _ = resp.Body.Close() }()

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		return nil, fmt.Errorf("failed reading hars %w", err)
	}

	zipReader, err := zip.NewReader(bytes.NewReader(body), int64(len(body)))
	if err != nil {
		return nil, fmt.Errorf("failed craeting zip reader %w", err)
	}
	return zipReader, nil
}

func (provider *apiServerProvider) GetVersion() (string, error) {
	if !provider.isReady {
@@ -3,6 +3,12 @@ package cmd
import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"os/signal"
	"runtime"
	"syscall"

	"github.com/up9inc/mizu/cli/config"
	"github.com/up9inc/mizu/cli/config/configStructs"
	"github.com/up9inc/mizu/cli/errormessage"
@@ -10,9 +16,6 @@ import (
	"github.com/up9inc/mizu/cli/logger"
	"github.com/up9inc/mizu/cli/mizu"
	"github.com/up9inc/mizu/cli/uiUtils"
	"os"
	"os/signal"
	"syscall"
)

func GetApiServerUrl() string {
@@ -26,6 +29,8 @@ func startProxyReportErrorIfAny(kubernetesProvider *kubernetes.Provider, cancel
			"Try setting different port by using --%s", errormessage.FormatError(err), configStructs.GuiPortTapName))
		cancel()
	}

	logger.Log.Debugf("proxy ended")
}

func waitForFinish(ctx context.Context, cancel context.CancelFunc) {
@@ -43,3 +48,22 @@ func waitForFinish(ctx context.Context, cancel context.CancelFunc) {
		cancel()
	}
}

func openBrowser(url string) {
	var err error

	switch runtime.GOOS {
	case "linux":
		err = exec.Command("xdg-open", url).Start()
	case "windows":
		err = exec.Command("rundll32", "url.dll,FileProtocolHandler", url).Start()
	case "darwin":
		err = exec.Command("open", url).Start()
	default:
		err = fmt.Errorf("unsupported platform")
	}

	if err != nil {
		logger.Log.Errorf("error while opening browser, %v", err)
	}
}
@@ -1,46 +0,0 @@
package cmd

import (
	"github.com/creasty/defaults"
	"github.com/spf13/cobra"
	"github.com/up9inc/mizu/cli/apiserver"
	"github.com/up9inc/mizu/cli/config"
	"github.com/up9inc/mizu/cli/config/configStructs"
	"github.com/up9inc/mizu/cli/logger"
	"github.com/up9inc/mizu/cli/mizu/version"
	"github.com/up9inc/mizu/cli/telemetry"
	"github.com/up9inc/mizu/cli/uiUtils"
)

var fetchCmd = &cobra.Command{
	Use:   "fetch",
	Short: "Download recorded traffic to files",
	RunE: func(cmd *cobra.Command, args []string) error {
		go telemetry.ReportRun("fetch", config.Config.Fetch)

		if err := apiserver.Provider.InitAndTestConnection(GetApiServerUrl(), 1); err != nil {
			logger.Log.Errorf(uiUtils.Error, "Couldn't connect to API server, make sure one running")
			return nil
		}

		if isCompatible, err := version.CheckVersionCompatibility(); err != nil {
			return err
		} else if !isCompatible {
			return nil
		}
		RunMizuFetch()
		return nil
	},
}

func init() {
	rootCmd.AddCommand(fetchCmd)

	defaultFetchConfig := configStructs.FetchConfig{}
	defaults.Set(&defaultFetchConfig)

	fetchCmd.Flags().StringP(configStructs.DirectoryFetchName, "d", defaultFetchConfig.Directory, "Provide a custom directory for fetched entries")
	fetchCmd.Flags().Int(configStructs.FromTimestampFetchName, defaultFetchConfig.FromTimestamp, "Custom start timestamp for fetched entries")
	fetchCmd.Flags().Int(configStructs.ToTimestampFetchName, defaultFetchConfig.ToTimestamp, "Custom end timestamp fetched entries")
	fetchCmd.Flags().Uint16P(configStructs.GuiPortFetchName, "p", defaultFetchConfig.GuiPort, "Provide a custom port for the web interface webserver")
}
@@ -1,25 +0,0 @@
package cmd

import (
	"github.com/up9inc/mizu/cli/apiserver"
	"github.com/up9inc/mizu/cli/config"
	"github.com/up9inc/mizu/cli/logger"
	"github.com/up9inc/mizu/cli/mizu/fsUtils"
	"github.com/up9inc/mizu/cli/uiUtils"
)

func RunMizuFetch() {
	if err := apiserver.Provider.InitAndTestConnection(GetApiServerUrl(), 5); err != nil {
		logger.Log.Errorf(uiUtils.Error, "Couldn't connect to API server, check logs")
	}

	zipReader, err := apiserver.Provider.GetHars(config.Config.Fetch.FromTimestamp, config.Config.Fetch.ToTimestamp)
	if err != nil {
		logger.Log.Errorf("Failed fetch data from API server %v", err)
		return
	}

	if err := fsUtils.Unzip(zipReader, config.Config.Fetch.Directory); err != nil {
		logger.Log.Debugf("[ERROR] failed unzip %v", err)
	}
}
@@ -10,6 +10,7 @@ import (
	"github.com/up9inc/mizu/cli/mizu/fsUtils"
	"github.com/up9inc/mizu/cli/mizu/version"
	"github.com/up9inc/mizu/cli/uiUtils"
	"time"
)

var rootCmd = &cobra.Command{
@@ -34,9 +35,12 @@ func init() {
}

func printNewVersionIfNeeded(versionChan chan string) {
	versionMsg := <-versionChan
	if versionMsg != "" {
		logger.Log.Infof(uiUtils.Yellow, versionMsg)
	select {
	case versionMsg := <-versionChan:
		if versionMsg != "" {
			logger.Log.Infof(uiUtils.Yellow, versionMsg)
		}
	case <-time.After(2 * time.Second):
	}
}
@@ -64,7 +64,6 @@ func init() {
	tapCmd.Flags().StringSliceP(configStructs.PlainTextFilterRegexesTapName, "r", defaultTapConfig.PlainTextFilterRegexes, "List of regex expressions that are used to filter matching values from text/plain http bodies")
	tapCmd.Flags().Bool(configStructs.DisableRedactionTapName, defaultTapConfig.DisableRedaction, "Disables redaction of potentially sensitive request/response headers and body values")
	tapCmd.Flags().String(configStructs.HumanMaxEntriesDBSizeTapName, defaultTapConfig.HumanMaxEntriesDBSize, "Override the default max entries db size")
	tapCmd.Flags().String(configStructs.DirectionTapName, defaultTapConfig.Direction, "Record traffic that goes in this direction (relative to the tapped pod): in/any")
	tapCmd.Flags().Bool(configStructs.DryRunTapName, defaultTapConfig.DryRun, "Preview of all pods matching the regex, without tapping them")
	tapCmd.Flags().String(configStructs.EnforcePolicyFile, defaultTapConfig.EnforcePolicyFile, "Yaml file with policy rules")
}
@@ -3,6 +3,11 @@ package cmd
import (
	"context"
	"fmt"
	"path"
	"regexp"
	"strings"
	"time"

	"github.com/up9inc/mizu/cli/apiserver"
	"github.com/up9inc/mizu/cli/config"
	"github.com/up9inc/mizu/cli/config/configStructs"
@@ -19,10 +24,6 @@ import (
	yaml "gopkg.in/yaml.v3"
	core "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"path"
	"regexp"
	"strings"
	"time"
)

const (
@@ -237,7 +238,6 @@ func updateMizuTappers(ctx context.Context, kubernetesProvider *kubernetes.Provi
		fmt.Sprintf("%s.%s.svc.cluster.local", state.apiServerService.Name, state.apiServerService.Namespace),
		nodeToTappedPodIPMap,
		serviceAccountName,
		config.Config.Tap.TapOutgoing(),
		config.Config.Tap.TapperResources,
		config.Config.ImagePullPolicy(),
	); err != nil {
@@ -379,13 +379,28 @@ func watchPodsForTapping(ctx context.Context, kubernetesProvider *kubernetes.Pro

	for {
		select {
		case pod := <-added:
		case pod, ok := <-added:
			if !ok {
				added = nil
				continue
			}

			logger.Log.Debugf("Added matching pod %s, ns: %s", pod.Name, pod.Namespace)
			restartTappersDebouncer.SetOn()
		case pod := <-removed:
		case pod, ok := <-removed:
			if !ok {
				removed = nil
				continue
			}

			logger.Log.Debugf("Removed matching pod %s, ns: %s", pod.Name, pod.Namespace)
			restartTappersDebouncer.SetOn()
		case pod := <-modified:
		case pod, ok := <-modified:
			if !ok {
				modified = nil
				continue
			}

			logger.Log.Debugf("Modified matching pod %s, ns: %s, phase: %s, ip: %s", pod.Name, pod.Namespace, pod.Status.Phase, pod.Status.PodIP)
			// Act only if the modified pod has already obtained an IP address.
			// After filtering for IPs, on a normal pod restart this includes the following events:
@@ -396,8 +411,12 @@ func watchPodsForTapping(ctx context.Context, kubernetesProvider *kubernetes.Pro
			if pod.Status.PodIP != "" {
				restartTappersDebouncer.SetOn()
			}
		case err, ok := <-errorChan:
			if !ok {
				errorChan = nil
				continue
			}

		case err := <-errorChan:
			logger.Log.Debugf("Watching pods loop, got error %v, stopping `restart tappers debouncer`", err)
			restartTappersDebouncer.Cancel()
			// TODO: Does this also perform cleanup?
|
||||
@@ -477,45 +496,63 @@ func watchApiServerPod(ctx context.Context, kubernetesProvider *kubernetes.Provi
|
||||
timeAfter := time.After(25 * time.Second)
|
||||
for {
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
logger.Log.Debugf("Watching API Server pod loop, ctx done")
|
||||
return
|
||||
case <-added:
|
||||
case _, ok := <-added:
|
||||
if !ok {
|
||||
added = nil
|
||||
continue
|
||||
}
|
||||
|
||||
logger.Log.Debugf("Watching API Server pod loop, added")
|
||||
continue
|
||||
case <-removed:
|
||||
case _, ok := <-removed:
|
||||
if !ok {
|
||||
removed = nil
|
||||
continue
|
||||
}
|
||||
|
||||
logger.Log.Infof("%s removed", mizu.ApiServerPodName)
|
||||
cancel()
|
||||
return
|
||||
case modifiedPod := <-modified:
|
||||
if modifiedPod == nil {
|
||||
logger.Log.Debugf("Watching API Server pod loop, modifiedPod with nil")
|
||||
case modifiedPod, ok := <-modified:
|
||||
if !ok {
|
||||
modified = nil
|
||||
continue
|
||||
}
|
||||
|
||||
logger.Log.Debugf("Watching API Server pod loop, modified: %v", modifiedPod.Status.Phase)
|
||||
if modifiedPod.Status.Phase == core.PodRunning && !isPodReady {
|
||||
isPodReady = true
|
||||
go startProxyReportErrorIfAny(kubernetesProvider, cancel)
|
||||
|
||||
if err := apiserver.Provider.InitAndTestConnection(GetApiServerUrl(), 20); err != nil {
|
||||
url := GetApiServerUrl()
|
||||
if err := apiserver.Provider.InitAndTestConnection(url); err != nil {
|
||||
logger.Log.Errorf(uiUtils.Error, "Couldn't connect to API server, check logs")
|
||||
cancel()
|
||||
break
|
||||
}
|
||||
logger.Log.Infof("Mizu is available at %s\n", GetApiServerUrl())
|
||||
logger.Log.Infof("Mizu is available at %s\n", url)
|
||||
openBrowser(url)
|
||||
requestForAnalysisIfNeeded()
|
||||
if err := apiserver.Provider.ReportTappedPods(state.currentlyTappedPods); err != nil {
|
||||
logger.Log.Debugf("[Error] failed update tapped pods %v", err)
|
||||
}
|
||||
}
|
||||
case _, ok := <-errorChan:
|
||||
if !ok {
|
||||
errorChan = nil
|
||||
continue
|
||||
}
|
||||
|
||||
logger.Log.Debugf("[ERROR] Agent creation, watching %v namespace", config.Config.MizuResourcesNamespace)
|
||||
cancel()
|
||||
|
||||
case <-timeAfter:
|
||||
if !isPodReady {
|
||||
logger.Log.Errorf(uiUtils.Error, "Mizu API server was not ready in time")
|
||||
cancel()
|
||||
}
|
||||
case <-errorChan:
|
||||
logger.Log.Debugf("[ERROR] Agent creation, watching %v namespace", config.Config.MizuResourcesNamespace)
|
||||
cancel()
|
||||
case <-ctx.Done():
|
||||
logger.Log.Debugf("Watching API Server pod loop, ctx done")
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@@ -3,6 +3,8 @@ package cmd
import (
"context"
"fmt"
"net/http"

"github.com/up9inc/mizu/cli/apiserver"
"github.com/up9inc/mizu/cli/config"
"github.com/up9inc/mizu/cli/kubernetes"
@@ -10,7 +12,6 @@ import (
"github.com/up9inc/mizu/cli/mizu"
"github.com/up9inc/mizu/cli/mizu/version"
"github.com/up9inc/mizu/cli/uiUtils"
"net/http"
)

func runMizuView() {
@@ -35,7 +36,9 @@ func runMizuView() {
return
}

response, err := http.Get(fmt.Sprintf("%s/", GetApiServerUrl()))
url := GetApiServerUrl()

response, err := http.Get(fmt.Sprintf("%s/", url))
if err == nil && response.StatusCode == 200 {
logger.Log.Infof("Found a running service %s and open port %d", mizu.ApiServerPodName, config.Config.View.GuiPort)
return
@@ -43,12 +46,13 @@ func runMizuView() {
logger.Log.Infof("Establishing connection to k8s cluster...")
go startProxyReportErrorIfAny(kubernetesProvider, cancel)

if err := apiserver.Provider.InitAndTestConnection(GetApiServerUrl(), 10); err != nil {
if err := apiserver.Provider.InitAndTestConnection(GetApiServerUrl()); err != nil {
logger.Log.Errorf(uiUtils.Error, "Couldn't connect to API server, check logs")
return
}

logger.Log.Infof("Mizu is available at %s\n", GetApiServerUrl())
logger.Log.Infof("Mizu is available at %s\n", url)
openBrowser(url)
if isCompatible, err := version.CheckVersionCompatibility(); err != nil {
logger.Log.Errorf("Failed to check versions compatibility %v", err)
cancel()

@@ -37,11 +37,13 @@ func InitConfig(cmd *cobra.Command) error {
return err
}

configFilePath := cmd.Flags().Lookup(ConfigFilePathCommandName).Value.String()

configFilePathFlag := cmd.Flags().Lookup(ConfigFilePathCommandName)
configFilePath := configFilePathFlag.Value.String()
if err := mergeConfigFile(configFilePath); err != nil {
return fmt.Errorf("invalid config, %w\n"+
"you can regenerate the file by removing it (%v) and using `mizu config -r`", err, configFilePath)
if configFilePathFlag.Changed || !os.IsNotExist(err) {
return fmt.Errorf("invalid config, %w\n"+
"you can regenerate the file by removing it (%v) and using `mizu config -r`", err, configFilePath)
}
}

cmd.Flags().Visit(initFlag)
@@ -67,7 +69,7 @@ func GetConfigWithDefaults() (string, error) {
func mergeConfigFile(configFilePath string) error {
reader, openErr := os.Open(configFilePath)
if openErr != nil {
return nil
return openErr
}

buf, readErr := ioutil.ReadAll(reader)

@@ -18,7 +18,6 @@ const (

type ConfigStruct struct {
Tap configStructs.TapConfig `yaml:"tap"`
Fetch configStructs.FetchConfig `yaml:"fetch"`
Version configStructs.VersionConfig `yaml:"version"`
View configStructs.ViewConfig `yaml:"view"`
Logs configStructs.LogsConfig `yaml:"logs"`

@@ -1,15 +0,0 @@
|
||||
package configStructs
|
||||
|
||||
const (
|
||||
DirectoryFetchName = "directory"
|
||||
FromTimestampFetchName = "from"
|
||||
ToTimestampFetchName = "to"
|
||||
GuiPortFetchName = "gui-port"
|
||||
)
|
||||
|
||||
type FetchConfig struct {
|
||||
Directory string `yaml:"directory" default:"."`
|
||||
FromTimestamp int `yaml:"from" default:"0"`
|
||||
ToTimestamp int `yaml:"to" default:"0"`
|
||||
GuiPort uint16 `yaml:"gui-port" default:"8899"`
|
||||
}
|
||||
@@ -3,10 +3,8 @@ package configStructs
import (
"errors"
"fmt"
"regexp"
"strings"

"github.com/up9inc/mizu/shared/units"
"regexp"
)

const (
@@ -17,7 +15,6 @@ const (
PlainTextFilterRegexesTapName = "regex-masking"
DisableRedactionTapName = "no-redact"
HumanMaxEntriesDBSizeTapName = "max-entries-db-size"
DirectionTapName = "direction"
DryRunTapName = "dry-run"
EnforcePolicyFile = "test-rules"
)
@@ -34,7 +31,6 @@ type TapConfig struct {
HealthChecksUserAgentHeaders []string `yaml:"ignored-user-agents"`
DisableRedaction bool `yaml:"no-redact" default:"false"`
HumanMaxEntriesDBSize string `yaml:"max-entries-db-size" default:"200MB"`
Direction string `yaml:"direction" default:"in"`
DryRun bool `yaml:"dry-run" default:"false"`
EnforcePolicyFile string `yaml:"test-rules"`
ApiServerResources Resources `yaml:"api-server-resources"`
@@ -53,15 +49,6 @@ func (config *TapConfig) PodRegex() *regexp.Regexp {
return podRegex
}

func (config *TapConfig) TapOutgoing() bool {
directionLowerCase := strings.ToLower(config.Direction)
if directionLowerCase == "any" {
return true
}

return false
}

func (config *TapConfig) MaxEntriesDBSizeBytes() int64 {
maxEntriesDBSizeBytes, _ := units.HumanReadableToBytes(config.HumanMaxEntriesDBSize)
return maxEntriesDBSizeBytes
@@ -78,10 +65,5 @@ func (config *TapConfig) Validate() error {
return errors.New(fmt.Sprintf("Could not parse --%s value %s", HumanMaxEntriesDBSizeTapName, config.HumanMaxEntriesDBSize))
}

directionLowerCase := strings.ToLower(config.Direction)
if directionLowerCase != "any" && directionLowerCase != "in" {
return errors.New(fmt.Sprintf("%s is not a valid value for flag --%s. Acceptable values are in/any.", config.Direction, DirectionTapName))
}

return nil
}

cli/config/envConfig.go (new file, 24 lines)
@@ -0,0 +1,24 @@
package config

import (
"os"
"strconv"
)

const (
ApiServerRetries = "API_SERVER_RETRIES"
)

func GetIntEnvConfig(key string, defaultValue int) int {
value := os.Getenv(key)
if value == "" {
return defaultValue
}

intValue, err := strconv.Atoi(value)
if err != nil {
return defaultValue
}

return intValue
}

@@ -7,15 +7,17 @@ import (
"encoding/json"
"errors"
"fmt"
"github.com/up9inc/mizu/cli/config/configStructs"
"github.com/up9inc/mizu/cli/logger"
"path/filepath"
"regexp"
"strconv"

"github.com/up9inc/mizu/cli/config/configStructs"
"github.com/up9inc/mizu/cli/logger"

"io"

"github.com/up9inc/mizu/cli/mizu"
"github.com/up9inc/mizu/shared"
"io"
core "k8s.io/api/core/v1"
rbac "k8s.io/api/rbac/v1"
k8serrors "k8s.io/apimachinery/pkg/api/errors"
@@ -55,21 +57,21 @@ func NewProvider(kubeConfigPath string) (*Provider, error) {
restClientConfig, err := kubernetesConfig.ClientConfig()
if err != nil {
if clientcmd.IsEmptyConfig(err) {
return nil, fmt.Errorf("couldn't find the kube config file, or file is empty (%s)\n" +
return nil, fmt.Errorf("couldn't find the kube config file, or file is empty (%s)\n"+
"you can set alternative kube config file path by adding the kube-config-path field to the mizu config file, err: %w", kubeConfigPath, err)
}
if clientcmd.IsConfigurationInvalid(err) {
return nil, fmt.Errorf("invalid kube config file (%s)\n" +
return nil, fmt.Errorf("invalid kube config file (%s)\n"+
"you can set alternative kube config file path by adding the kube-config-path field to the mizu config file, err: %w", kubeConfigPath, err)
}

return nil, fmt.Errorf("error while using kube config (%s)\n" +
return nil, fmt.Errorf("error while using kube config (%s)\n"+
"you can set alternative kube config file path by adding the kube-config-path field to the mizu config file, err: %w", kubeConfigPath, err)
}

clientSet, err := getClientSet(restClientConfig)
if err != nil {
return nil, fmt.Errorf("error while using kube config (%s)\n" +
return nil, fmt.Errorf("error while using kube config (%s)\n"+
"you can set alternative kube config file path by adding the kube-config-path field to the mizu config file, err: %w", kubeConfigPath, err)
}

@@ -573,11 +575,11 @@ func (provider *Provider) CreateConfigMap(ctx context.Context, namespace string,
return nil
}

func (provider *Provider) ApplyMizuTapperDaemonSet(ctx context.Context, namespace string, daemonSetName string, podImage string, tapperPodName string, apiServerPodIp string, nodeToTappedPodIPMap map[string][]string, serviceAccountName string, tapOutgoing bool, resources configStructs.Resources, imagePullPolicy core.PullPolicy) error {
logger.Log.Debugf("Applying %d tapper deamonsets, ns: %s, daemonSetName: %s, podImage: %s, tapperPodName: %s", len(nodeToTappedPodIPMap), namespace, daemonSetName, podImage, tapperPodName)
func (provider *Provider) ApplyMizuTapperDaemonSet(ctx context.Context, namespace string, daemonSetName string, podImage string, tapperPodName string, apiServerPodIp string, nodeToTappedPodIPMap map[string][]string, serviceAccountName string, resources configStructs.Resources, imagePullPolicy core.PullPolicy) error {
logger.Log.Debugf("Applying %d tapper daemon sets, ns: %s, daemonSetName: %s, podImage: %s, tapperPodName: %s", len(nodeToTappedPodIPMap), namespace, daemonSetName, podImage, tapperPodName)

if len(nodeToTappedPodIPMap) == 0 {
return fmt.Errorf("Daemon set %s must tap at least 1 pod", daemonSetName)
return fmt.Errorf("daemon set %s must tap at least 1 pod", daemonSetName)
}

nodeToTappedPodIPMapJsonStr, err := json.Marshal(nodeToTappedPodIPMap)
@@ -590,9 +592,7 @@ func (provider *Provider) ApplyMizuTapperDaemonSet(ctx context.Context, namespac
"-i", "any",
"--tap",
"--api-server-address", fmt.Sprintf("ws://%s/wsTapper", apiServerPodIp),
}
if tapOutgoing {
mizuCmd = append(mizuCmd, "--anydirection")
"--nodefrag",
}

agentContainer := applyconfcore.Container()
@@ -604,6 +604,7 @@ func (provider *Provider) ApplyMizuTapperDaemonSet(ctx context.Context, namespac
agentContainer.WithEnv(
applyconfcore.EnvVar().WithName(shared.HostModeEnvVar).WithValue("1"),
applyconfcore.EnvVar().WithName(shared.TappedAddressesPerNodeDictEnvVar).WithValue(string(nodeToTappedPodIPMapJsonStr)),
applyconfcore.EnvVar().WithName(shared.GoGCEnvVar).WithValue("12800"),
)
agentContainer.WithEnv(
applyconfcore.EnvVar().WithName(shared.NodeNameEnvVar).WithValueFrom(
@@ -739,6 +740,16 @@ func (provider *Provider) GetPodLogs(namespace string, podName string, ctx conte
return str, nil
}

func (provider *Provider) GetNamespaceEvents(namespace string, ctx context.Context) (string, error) {
eventsOpts := metav1.ListOptions{}
eventList, err := provider.clientSet.CoreV1().Events(namespace).List(ctx, eventsOpts)
if err != nil {
return "", fmt.Errorf("error getting events on ns: %s, %w", namespace, err)
}

return eventList.String(), nil
}

func getClientSet(config *restclient.Config) (*kubernetes.Clientset, error) {
clientSet, err := kubernetes.NewForConfig(config)
if err != nil {

|
||||
server := http.Server{
|
||||
Handler: mux,
|
||||
}
|
||||
|
||||
return server.Serve(l)
|
||||
}
|
||||
|
||||
|
||||
@@ -39,22 +39,39 @@ func DumpLogs(provider *kubernetes.Provider, ctx context.Context, filePath strin
|
||||
} else {
|
||||
logger.Log.Debugf("Successfully read log length %d for pod: %s.%s", len(logs), pod.Namespace, pod.Name)
|
||||
}
|
||||
|
||||
if err := AddStrToZip(zipWriter, logs, fmt.Sprintf("%s.%s.log", pod.Namespace, pod.Name)); err != nil {
|
||||
logger.Log.Errorf("Failed write logs, %v", err)
|
||||
} else {
|
||||
logger.Log.Debugf("Successfully added log length %d from pod: %s.%s", len(logs), pod.Namespace, pod.Name)
|
||||
}
|
||||
}
|
||||
|
||||
events, err := provider.GetNamespaceEvents(config.Config.MizuResourcesNamespace, ctx)
|
||||
if err != nil {
|
||||
logger.Log.Debugf("Failed to get k8b events, %v", err)
|
||||
} else {
|
||||
logger.Log.Debugf("Successfully read events for k8b namespace: %s", config.Config.MizuResourcesNamespace)
|
||||
}
|
||||
|
||||
if err := AddStrToZip(zipWriter, events, fmt.Sprintf("%s_events.log", config.Config.MizuResourcesNamespace)); err != nil {
|
||||
logger.Log.Debugf("Failed write logs, %v", err)
|
||||
} else {
|
||||
logger.Log.Debugf("Successfully added events for k8b namespace: %s", config.Config.MizuResourcesNamespace)
|
||||
}
|
||||
|
||||
if err := AddFileToZip(zipWriter, config.Config.ConfigFilePath); err != nil {
|
||||
logger.Log.Debugf("Failed write file, %v", err)
|
||||
} else {
|
||||
logger.Log.Debugf("Successfully added file %s", config.Config.ConfigFilePath)
|
||||
}
|
||||
|
||||
if err := AddFileToZip(zipWriter, logger.GetLogFilePath()); err != nil {
|
||||
logger.Log.Debugf("Failed write file, %v", err)
|
||||
} else {
|
||||
logger.Log.Debugf("Successfully added file %s", logger.GetLogFilePath())
|
||||
}
|
||||
|
||||
logger.Log.Infof("You can find the zip file with all logs in %s\n", filePath)
|
||||
return nil
|
||||
}
|
||||
|
||||
@@ -8,6 +8,7 @@ import (
"github.com/up9inc/mizu/cli/mizu"
"io/ioutil"
"net/http"
"runtime"
"time"

"github.com/google/go-github/v37/github"
@@ -37,6 +38,7 @@ func CheckNewerVersion(versionChan chan string) {
latestRelease, _, err := client.Repositories.GetLatestRelease(context.Background(), "up9inc", "mizu")
if err != nil {
logger.Log.Debugf("[ERROR] Failed to get latest release")
versionChan <- ""
return
}

@@ -49,12 +51,14 @@ func CheckNewerVersion(versionChan chan string) {
}
if versionFileUrl == "" {
logger.Log.Debugf("[ERROR] Version file not found in the latest release")
versionChan <- ""
return
}

res, err := http.Get(versionFileUrl)
if err != nil {
logger.Log.Debugf("[ERROR] Failed to get the version file %v", err)
versionChan <- ""
return
}

@@ -62,6 +66,7 @@ func CheckNewerVersion(versionChan chan string) {
res.Body.Close()
if err != nil {
logger.Log.Debugf("[ERROR] Failed to read the version file -> %v", err)
versionChan <- ""
return
}
gitHubVersion := string(data)
@@ -72,7 +77,8 @@ func CheckNewerVersion(versionChan chan string) {
logger.Log.Debugf("Finished version validation, github version %v, current version %v, took %v", gitHubVersion, currentSemVer, time.Since(start))

if gitHubVersionSemVer.GreaterThan(currentSemVer) {
versionChan <- fmt.Sprintf("Update available! %v -> %v (%v)", mizu.SemVer, gitHubVersion, *latestRelease.HTMLURL)
versionChan <- fmt.Sprintf("Update available! %v -> %v (curl -Lo mizu %v/mizu_%s_amd64 && chmod 755 mizu)", mizu.SemVer, gitHubVersion, *latestRelease.HTMLURL, runtime.GOOS)
} else {
versionChan <- ""
}
versionChan <- ""
}

@@ -8,4 +8,5 @@ const (
MaxEntriesDBSizeBytesEnvVar = "MAX_ENTRIES_DB_BYTES"
RulePolicyPath = "/app/enforce-policy/"
RulePolicyFileName = "enforce-policy.yaml"
GoGCEnvVar = "GOGC"
)

@@ -5,7 +5,7 @@ import (
"io/ioutil"
"strings"

yaml "gopkg.in/yaml.v3"
"gopkg.in/yaml.v3"
)

type WebSocketMessageType string

tap/api/api.go (new file, 198 lines)
@@ -0,0 +1,198 @@
package api

import (
"bufio"
"plugin"
"sync"
"time"
)

type Protocol struct {
Name string `json:"name"`
LongName string `json:"longName"`
Abbreviation string `json:"abbreviation"`
Version string `json:"version"`
BackgroundColor string `json:"backgroundColor"`
ForegroundColor string `json:"foregroundColor"`
FontSize int8 `json:"fontSize"`
ReferenceLink string `json:"referenceLink"`
Ports []string `json:"ports"`
Priority uint8 `json:"priority"`
}

type Extension struct {
Protocol *Protocol
Path string
Plug *plugin.Plugin
Dissector Dissector
MatcherMap *sync.Map
}

type ConnectionInfo struct {
ClientIP string
ClientPort string
ServerIP string
ServerPort string
IsOutgoing bool
}

type TcpID struct {
SrcIP string
DstIP string
SrcPort string
DstPort string
Ident string
}

type CounterPair struct {
Request uint
Response uint
}

type GenericMessage struct {
IsRequest bool `json:"isRequest"`
CaptureTime time.Time `json:"captureTime"`
Payload interface{} `json:"payload"`
}

type RequestResponsePair struct {
Request GenericMessage `json:"request"`
Response GenericMessage `json:"response"`
}

// `Protocol` is modified in the later stages of data propagation. Therefore it's not a pointer.
type OutputChannelItem struct {
Protocol Protocol
Timestamp int64
ConnectionInfo *ConnectionInfo
Pair *RequestResponsePair
}

type SuperTimer struct {
CaptureTime time.Time
}

type SuperIdentifier struct {
Protocol *Protocol
IsClosedOthers bool
}

type Dissector interface {
Register(*Extension)
Ping()
Dissect(b *bufio.Reader, isClient bool, tcpID *TcpID, counterPair *CounterPair, superTimer *SuperTimer, superIdentifier *SuperIdentifier, emitter Emitter) error
Analyze(item *OutputChannelItem, entryId string, resolvedSource string, resolvedDestination string) *MizuEntry
Summarize(entry *MizuEntry) *BaseEntryDetails
Represent(entry *MizuEntry) (protocol Protocol, object []byte, bodySize int64, err error)
}

type Emitting struct {
AppStats *AppStats
OutputChannel chan *OutputChannelItem
}

type Emitter interface {
Emit(item *OutputChannelItem)
}

func (e *Emitting) Emit(item *OutputChannelItem) {
e.OutputChannel <- item
e.AppStats.IncMatchedPairs()
}

type MizuEntry struct {
ID uint `gorm:"primarykey"`
CreatedAt time.Time
UpdatedAt time.Time
ProtocolName string `json:"protocolName" gorm:"column:protocolName"`
ProtocolLongName string `json:"protocolLongName" gorm:"column:protocolLongName"`
ProtocolAbbreviation string `json:"protocolAbbreviation" gorm:"column:protocolAbbreviation"`
ProtocolVersion string `json:"protocolVersion" gorm:"column:protocolVersion"`
ProtocolBackgroundColor string `json:"protocolBackgroundColor" gorm:"column:protocolBackgroundColor"`
ProtocolForegroundColor string `json:"protocolForegroundColor" gorm:"column:protocolForegroundColor"`
ProtocolFontSize int8 `json:"protocolFontSize" gorm:"column:protocolFontSize"`
ProtocolReferenceLink string `json:"protocolReferenceLink" gorm:"column:protocolReferenceLink"`
Entry string `json:"entry,omitempty" gorm:"column:entry"`
EntryId string `json:"entryId" gorm:"column:entryId"`
Url string `json:"url" gorm:"column:url"`
Method string `json:"method" gorm:"column:method"`
Status int `json:"status" gorm:"column:status"`
RequestSenderIp string `json:"requestSenderIp" gorm:"column:requestSenderIp"`
Service string `json:"service" gorm:"column:service"`
Timestamp int64 `json:"timestamp" gorm:"column:timestamp"`
ElapsedTime int64 `json:"elapsedTime" gorm:"column:elapsedTime"`
Path string `json:"path" gorm:"column:path"`
ResolvedSource string `json:"resolvedSource,omitempty" gorm:"column:resolvedSource"`
ResolvedDestination string `json:"resolvedDestination,omitempty" gorm:"column:resolvedDestination"`
SourceIp string `json:"sourceIp,omitempty" gorm:"column:sourceIp"`
DestinationIp string `json:"destinationIp,omitempty" gorm:"column:destinationIp"`
SourcePort string `json:"sourcePort,omitempty" gorm:"column:sourcePort"`
DestinationPort string `json:"destinationPort,omitempty" gorm:"column:destinationPort"`
IsOutgoing bool `json:"isOutgoing,omitempty" gorm:"column:isOutgoing"`
EstimatedSizeBytes int `json:"-" gorm:"column:estimatedSizeBytes"`
}

type MizuEntryWrapper struct {
Protocol Protocol `json:"protocol"`
Representation string `json:"representation"`
BodySize int64 `json:"bodySize"`
Data MizuEntry `json:"data"`
}

type BaseEntryDetails struct {
Id string `json:"id,omitempty"`
Protocol Protocol `json:"protocol,omitempty"`
Url string `json:"url,omitempty"`
RequestSenderIp string `json:"requestSenderIp,omitempty"`
Service string `json:"service,omitempty"`
Path string `json:"path,omitempty"`
Summary string `json:"summary,omitempty"`
StatusCode int `json:"statusCode"`
Method string `json:"method,omitempty"`
Timestamp int64 `json:"timestamp,omitempty"`
SourceIp string `json:"sourceIp,omitempty"`
DestinationIp string `json:"destinationIp,omitempty"`
SourcePort string `json:"sourcePort,omitempty"`
DestinationPort string `json:"destinationPort,omitempty"`
IsOutgoing bool `json:"isOutgoing,omitempty"`
Latency int64 `json:"latency,omitempty"`
Rules ApplicableRules `json:"rules,omitempty"`
}

type ApplicableRules struct {
Latency int64 `json:"latency,omitempty"`
Status bool `json:"status,omitempty"`
NumberOfRules int `json:"numberOfRules,omitempty"`
}

type DataUnmarshaler interface {
UnmarshalData(*MizuEntry) error
}

func (bed *BaseEntryDetails) UnmarshalData(entry *MizuEntry) error {
bed.Protocol = Protocol{
Name: entry.ProtocolName,
LongName: entry.ProtocolLongName,
Abbreviation: entry.ProtocolAbbreviation,
Version: entry.ProtocolVersion,
BackgroundColor: entry.ProtocolBackgroundColor,
ForegroundColor: entry.ProtocolForegroundColor,
FontSize: entry.ProtocolFontSize,
ReferenceLink: entry.ProtocolReferenceLink,
}
bed.Id = entry.EntryId
bed.Url = entry.Url
bed.Service = entry.Service
bed.Summary = entry.Path
bed.StatusCode = entry.Status
bed.Method = entry.Method
bed.Timestamp = entry.Timestamp
bed.RequestSenderIp = entry.RequestSenderIp
bed.IsOutgoing = entry.IsOutgoing
return nil
}

const (
TABLE string = "table"
BODY string = "body"
)

tap/api/go.mod (new file, 3 lines)
@@ -0,0 +1,3 @@
module github.com/up9inc/mizu/tap/api

go 1.16

tap/api/stats_tracker.go (new file, 70 lines)
@@ -0,0 +1,70 @@
package api

import (
"sync/atomic"
"time"
)

type AppStats struct {
StartTime time.Time `json:"-"`
ProcessedBytes uint64 `json:"processedBytes"`
PacketsCount uint64 `json:"packetsCount"`
TcpPacketsCount uint64 `json:"tcpPacketsCount"`
ReassembledTcpPayloadsCount uint64 `json:"reassembledTcpPayloadsCount"`
TlsConnectionsCount uint64 `json:"tlsConnectionsCount"`
MatchedPairs uint64 `json:"matchedPairs"`
DroppedTcpStreams uint64 `json:"droppedTcpStreams"`
}

func (as *AppStats) IncMatchedPairs() {
atomic.AddUint64(&as.MatchedPairs, 1)
}

func (as *AppStats) IncDroppedTcpStreams() {
atomic.AddUint64(&as.DroppedTcpStreams, 1)
}

func (as *AppStats) IncPacketsCount() uint64 {
atomic.AddUint64(&as.PacketsCount, 1)
return as.PacketsCount
}

func (as *AppStats) IncTcpPacketsCount() {
atomic.AddUint64(&as.TcpPacketsCount, 1)
}

func (as *AppStats) IncReassembledTcpPayloadsCount() {
atomic.AddUint64(&as.ReassembledTcpPayloadsCount, 1)
}

func (as *AppStats) IncTlsConnectionsCount() {
atomic.AddUint64(&as.TlsConnectionsCount, 1)
}

func (as *AppStats) UpdateProcessedBytes(size uint64) {
atomic.AddUint64(&as.ProcessedBytes, size)
}

func (as *AppStats) SetStartTime(startTime time.Time) {
as.StartTime = startTime
}

func (as *AppStats) DumpStats() *AppStats {
currentAppStats := &AppStats{StartTime: as.StartTime}

currentAppStats.ProcessedBytes = resetUint64(&as.ProcessedBytes)
currentAppStats.PacketsCount = resetUint64(&as.PacketsCount)
currentAppStats.TcpPacketsCount = resetUint64(&as.TcpPacketsCount)
currentAppStats.ReassembledTcpPayloadsCount = resetUint64(&as.ReassembledTcpPayloadsCount)
currentAppStats.TlsConnectionsCount = resetUint64(&as.TlsConnectionsCount)
currentAppStats.MatchedPairs = resetUint64(&as.MatchedPairs)
currentAppStats.DroppedTcpStreams = resetUint64(&as.DroppedTcpStreams)

return currentAppStats
}

func resetUint64(ref *uint64) (val uint64) {
val = atomic.LoadUint64(ref)
atomic.StoreUint64(ref, 0)
return
}

@@ -1,11 +1,12 @@
|
||||
package tap
|
||||
|
||||
import (
|
||||
"github.com/romana/rlog"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/google/gopacket/reassembly"
|
||||
"github.com/romana/rlog"
|
||||
"github.com/up9inc/mizu/tap/api"
|
||||
)
|
||||
|
||||
type CleanerStats struct {
|
||||
@@ -17,7 +18,6 @@ type CleanerStats struct {
|
||||
type Cleaner struct {
|
||||
assembler *reassembly.Assembler
|
||||
assemblerMutex *sync.Mutex
|
||||
matcher *requestResponseMatcher
|
||||
cleanPeriod time.Duration
|
||||
connectionTimeout time.Duration
|
||||
stats CleanerStats
|
||||
@@ -32,13 +32,15 @@ func (cl *Cleaner) clean() {
|
||||
flushed, closed := cl.assembler.FlushCloseOlderThan(startCleanTime.Add(-cl.connectionTimeout))
|
||||
cl.assemblerMutex.Unlock()
|
||||
|
||||
deleted := cl.matcher.deleteOlderThan(startCleanTime.Add(-cl.connectionTimeout))
|
||||
for _, extension := range extensions {
|
||||
deleted := deleteOlderThan(extension.MatcherMap, startCleanTime.Add(-cl.connectionTimeout))
|
||||
cl.stats.deleted += deleted
|
||||
}
|
||||
|
||||
cl.statsMutex.Lock()
|
||||
rlog.Debugf("Assembler Stats after cleaning %s", cl.assembler.Dump())
|
||||
cl.stats.flushed += flushed
|
||||
cl.stats.closed += closed
|
||||
cl.stats.deleted += deleted
|
||||
cl.statsMutex.Unlock()
|
||||
}
|
||||
|
||||
@@ -70,3 +72,25 @@ func (cl *Cleaner) dumpStats() CleanerStats {
|
||||
|
||||
return stats
|
||||
}
|
||||
|
||||
func deleteOlderThan(matcherMap *sync.Map, t time.Time) int {
|
||||
numDeleted := 0
|
||||
|
||||
if matcherMap == nil {
|
||||
return numDeleted
|
||||
}
|
||||
|
||||
matcherMap.Range(func(key interface{}, value interface{}) bool {
|
||||
message, _ := value.(*api.GenericMessage)
|
||||
// TODO: Investigate the reason why `request` is `nil` in some rare occasion
|
||||
if message != nil {
|
||||
if message.CaptureTime.Before(t) {
|
||||
matcherMap.Delete(key)
|
||||
numDeleted++
|
||||
}
|
||||
}
|
||||
return true
|
||||
})
|
||||
|
||||
return numDeleted
|
||||
}
9	tap/extensions/amqp/go.mod	Normal file
@@ -0,0 +1,9 @@
module github.com/up9inc/mizu/tap/extensions/amqp

go 1.16

require (
	github.com/up9inc/mizu/tap/api v0.0.0
)

replace github.com/up9inc/mizu/tap/api v0.0.0 => ../../api
664	tap/extensions/amqp/helpers.go	Normal file
@@ -0,0 +1,664 @@
package main

import (
	"encoding/json"
	"fmt"
	"strconv"
	"time"

	"github.com/up9inc/mizu/tap/api"
)

var connectionMethodMap = map[int]string{
	10: "connection start",
	11: "connection start-ok",
	20: "connection secure",
	21: "connection secure-ok",
	30: "connection tune",
	31: "connection tune-ok",
	40: "connection open",
	41: "connection open-ok",
	50: "connection close",
	51: "connection close-ok",
	60: "connection blocked",
	61: "connection unblocked",
}

var channelMethodMap = map[int]string{
	10: "channel open",
	11: "channel open-ok",
	20: "channel flow",
	21: "channel flow-ok",
	40: "channel close",
	41: "channel close-ok",
}

var exchangeMethodMap = map[int]string{
	10: "exchange declare",
	11: "exchange declare-ok",
	20: "exchange delete",
	21: "exchange delete-ok",
	30: "exchange bind",
	31: "exchange bind-ok",
	40: "exchange unbind",
	51: "exchange unbind-ok",
}

var queueMethodMap = map[int]string{
	10: "queue declare",
	11: "queue declare-ok",
	20: "queue bind",
	21: "queue bind-ok",
	50: "queue unbind",
	51: "queue unbind-ok",
	30: "queue purge",
	31: "queue purge-ok",
	40: "queue delete",
	41: "queue delete-ok",
}

var basicMethodMap = map[int]string{
	10:  "basic qos",
	11:  "basic qos-ok",
	20:  "basic consume",
	21:  "basic consume-ok",
	30:  "basic cancel",
	31:  "basic cancel-ok",
	40:  "basic publish",
	50:  "basic return",
	60:  "basic deliver",
	70:  "basic get",
	71:  "basic get-ok",
	72:  "basic get-empty",
	80:  "basic ack",
	90:  "basic reject",
	100: "basic recover-async",
	110: "basic recover",
	111: "basic recover-ok",
	120: "basic nack",
}

var txMethodMap = map[int]string{
	10: "tx select",
	11: "tx select-ok",
	20: "tx commit",
	21: "tx commit-ok",
	30: "tx rollback",
	31: "tx rollback-ok",
}

type AMQPWrapper struct {
	Method  string      `json:"method"`
	Url     string      `json:"url"`
	Details interface{} `json:"details"`
}

func emitAMQP(event interface{}, _type string, method string, connectionInfo *api.ConnectionInfo, captureTime time.Time, emitter api.Emitter) {
	request := &api.GenericMessage{
		IsRequest:   true,
		CaptureTime: captureTime,
		Payload: AMQPPayload{
			Data: &AMQPWrapper{
				Method:  method,
				Url:     "",
				Details: event,
			},
		},
	}
	item := &api.OutputChannelItem{
		Protocol:       protocol,
		Timestamp:      captureTime.UnixNano() / int64(time.Millisecond),
		ConnectionInfo: connectionInfo,
		Pair: &api.RequestResponsePair{
			Request:  *request,
			Response: api.GenericMessage{},
		},
	}
	emitter.Emit(item)
}

func representProperties(properties map[string]interface{}, rep []interface{}) ([]interface{}, string, string) {
	contentType := ""
	contentEncoding := ""
	deliveryMode := ""
	priority := ""
	correlationId := ""
	replyTo := ""
	expiration := ""
	messageId := ""
	timestamp := ""
	_type := ""
	userId := ""
	appId := ""

	if properties["ContentType"] != nil {
		contentType = properties["ContentType"].(string)
	}
	if properties["ContentEncoding"] != nil {
		contentEncoding = properties["ContentEncoding"].(string)
	}
	if properties["DeliveryMode"] != nil {
		deliveryMode = fmt.Sprintf("%g", properties["DeliveryMode"].(float64))
	}
	if properties["Priority"] != nil {
		priority = fmt.Sprintf("%g", properties["Priority"].(float64))
	}
	if properties["CorrelationId"] != nil {
		correlationId = properties["CorrelationId"].(string)
	}
	if properties["ReplyTo"] != nil {
		replyTo = properties["ReplyTo"].(string)
	}
	if properties["Expiration"] != nil {
		expiration = properties["Expiration"].(string)
	}
	if properties["MessageId"] != nil {
		messageId = properties["MessageId"].(string)
	}
	if properties["Timestamp"] != nil {
		timestamp = properties["Timestamp"].(string)
	}
	if properties["Type"] != nil {
		_type = properties["Type"].(string)
	}
	if properties["UserId"] != nil {
		userId = properties["UserId"].(string)
	}
	if properties["AppId"] != nil {
		appId = properties["AppId"].(string)
	}

	props, _ := json.Marshal([]map[string]string{
		{
			"name":  "Content Type",
			"value": contentType,
		},
		{
			"name":  "Content Encoding",
			"value": contentEncoding,
		},
		{
			"name":  "Delivery Mode",
			"value": deliveryMode,
		},
		{
			"name":  "Priority",
			"value": priority,
		},
		{
			"name":  "Correlation ID",
			"value": correlationId,
		},
		{
			"name":  "Reply To",
			"value": replyTo,
		},
		{
			"name":  "Expiration",
			"value": expiration,
		},
		{
			"name":  "Message ID",
			"value": messageId,
		},
		{
			"name":  "Timestamp",
			"value": timestamp,
		},
		{
			"name":  "Type",
			"value": _type,
		},
		{
			"name":  "User ID",
			"value": userId,
		},
		{
			"name":  "App ID",
			"value": appId,
		},
	})
	rep = append(rep, map[string]string{
		"type":  api.TABLE,
		"title": "Properties",
		"data":  string(props),
	})

	return rep, contentType, contentEncoding
}

func representBasicPublish(event map[string]interface{}) []interface{} {
	rep := make([]interface{}, 0)

	details, _ := json.Marshal([]map[string]string{
		{
			"name":  "Exchange",
			"value": event["Exchange"].(string),
		},
		{
			"name":  "Routing Key",
			"value": event["RoutingKey"].(string),
		},
		{
			"name":  "Mandatory",
			"value": strconv.FormatBool(event["Mandatory"].(bool)),
		},
		{
			"name":  "Immediate",
			"value": strconv.FormatBool(event["Immediate"].(bool)),
		},
	})
	rep = append(rep, map[string]string{
		"type":  api.TABLE,
		"title": "Details",
		"data":  string(details),
	})

	properties := event["Properties"].(map[string]interface{})
	rep, contentType, _ := representProperties(properties, rep)

	if properties["Headers"] != nil {
		headers := make([]map[string]string, 0)
		for name, value := range properties["Headers"].(map[string]interface{}) {
			headers = append(headers, map[string]string{
				"name":  name,
				"value": value.(string),
			})
		}
		headersMarshaled, _ := json.Marshal(headers)
		rep = append(rep, map[string]string{
			"type":  api.TABLE,
			"title": "Headers",
			"data":  string(headersMarshaled),
		})
	}

	if event["Body"] != nil {
		rep = append(rep, map[string]string{
			"type":      api.BODY,
			"title":     "Body",
			"encoding":  "base64",
			"mime_type": contentType,
			"data":      event["Body"].(string),
		})
	}

	return rep
}

func representBasicDeliver(event map[string]interface{}) []interface{} {
	rep := make([]interface{}, 0)

	consumerTag := ""
	deliveryTag := ""
	redelivered := ""

	if event["ConsumerTag"] != nil {
		consumerTag = event["ConsumerTag"].(string)
	}
	if event["DeliveryTag"] != nil {
		deliveryTag = fmt.Sprintf("%g", event["DeliveryTag"].(float64))
	}
	if event["Redelivered"] != nil {
		redelivered = strconv.FormatBool(event["Redelivered"].(bool))
	}

	details, _ := json.Marshal([]map[string]string{
		{
			"name":  "Consumer Tag",
			"value": consumerTag,
		},
		{
			"name":  "Delivery Tag",
			"value": deliveryTag,
		},
		{
			"name":  "Redelivered",
			"value": redelivered,
		},
		{
			"name":  "Exchange",
			"value": event["Exchange"].(string),
		},
		{
			"name":  "Routing Key",
			"value": event["RoutingKey"].(string),
		},
	})
	rep = append(rep, map[string]string{
		"type":  api.TABLE,
		"title": "Details",
		"data":  string(details),
	})

	properties := event["Properties"].(map[string]interface{})
	rep, contentType, _ := representProperties(properties, rep)

	if properties["Headers"] != nil {
		headers := make([]map[string]string, 0)
		for name, value := range properties["Headers"].(map[string]interface{}) {
			headers = append(headers, map[string]string{
				"name":  name,
				"value": value.(string),
			})
		}
		headersMarshaled, _ := json.Marshal(headers)
		rep = append(rep, map[string]string{
			"type":  api.TABLE,
			"title": "Headers",
			"data":  string(headersMarshaled),
		})
	}

	if event["Body"] != nil {
		rep = append(rep, map[string]string{
			"type":      api.BODY,
			"title":     "Body",
			"encoding":  "base64",
			"mime_type": contentType,
			"data":      event["Body"].(string),
		})
	}

	return rep
}

func representQueueDeclare(event map[string]interface{}) []interface{} {
	rep := make([]interface{}, 0)

	details, _ := json.Marshal([]map[string]string{
		{
			"name":  "Queue",
			"value": event["Queue"].(string),
		},
		{
			"name":  "Passive",
			"value": strconv.FormatBool(event["Passive"].(bool)),
		},
		{
			"name":  "Durable",
			"value": strconv.FormatBool(event["Durable"].(bool)),
		},
		{
			"name":  "Exclusive",
			"value": strconv.FormatBool(event["Exclusive"].(bool)),
		},
		{
			"name":  "Auto Delete",
			"value": strconv.FormatBool(event["AutoDelete"].(bool)),
		},
		{
			"name":  "NoWait",
			"value": strconv.FormatBool(event["NoWait"].(bool)),
		},
	})
	rep = append(rep, map[string]string{
		"type":  api.TABLE,
		"title": "Details",
		"data":  string(details),
	})

	if event["Arguments"] != nil {
		headers := make([]map[string]string, 0)
		for name, value := range event["Arguments"].(map[string]interface{}) {
			headers = append(headers, map[string]string{
				"name":  name,
				"value": value.(string),
			})
		}
		headersMarshaled, _ := json.Marshal(headers)
		rep = append(rep, map[string]string{
			"type":  api.TABLE,
			"title": "Arguments",
			"data":  string(headersMarshaled),
		})
	}

	return rep
}

func representExchangeDeclare(event map[string]interface{}) []interface{} {
	rep := make([]interface{}, 0)

	details, _ := json.Marshal([]map[string]string{
		{
			"name":  "Exchange",
			"value": event["Exchange"].(string),
		},
		{
			"name":  "Type",
			"value": event["Type"].(string),
		},
		{
			"name":  "Passive",
			"value": strconv.FormatBool(event["Passive"].(bool)),
		},
		{
			"name":  "Durable",
			"value": strconv.FormatBool(event["Durable"].(bool)),
		},
		{
			"name":  "Auto Delete",
			"value": strconv.FormatBool(event["AutoDelete"].(bool)),
		},
		{
			"name":  "Internal",
			"value": strconv.FormatBool(event["Internal"].(bool)),
		},
		{
			"name":  "NoWait",
			"value": strconv.FormatBool(event["NoWait"].(bool)),
		},
	})
	rep = append(rep, map[string]string{
		"type":  api.TABLE,
		"title": "Details",
		"data":  string(details),
	})

	if event["Arguments"] != nil {
		headers := make([]map[string]string, 0)
		for name, value := range event["Arguments"].(map[string]interface{}) {
			headers = append(headers, map[string]string{
				"name":  name,
				"value": value.(string),
			})
		}
		headersMarshaled, _ := json.Marshal(headers)
		rep = append(rep, map[string]string{
			"type":  api.TABLE,
			"title": "Arguments",
			"data":  string(headersMarshaled),
		})
	}

	return rep
}

func representConnectionStart(event map[string]interface{}) []interface{} {
	rep := make([]interface{}, 0)

	details, _ := json.Marshal([]map[string]string{
		{
			"name":  "Version Major",
			"value": fmt.Sprintf("%g", event["VersionMajor"].(float64)),
		},
		{
			"name":  "Version Minor",
			"value": fmt.Sprintf("%g", event["VersionMinor"].(float64)),
		},
		{
			"name":  "Mechanisms",
			"value": event["Mechanisms"].(string),
		},
		{
			"name":  "Locales",
			"value": event["Locales"].(string),
		},
	})
	rep = append(rep, map[string]string{
		"type":  api.TABLE,
		"title": "Details",
		"data":  string(details),
	})

	if event["ServerProperties"] != nil {
		headers := make([]map[string]string, 0)
		for name, value := range event["ServerProperties"].(map[string]interface{}) {
			var outcome string
			switch value.(type) {
			case string:
				outcome = value.(string)
				break
			case map[string]interface{}:
				x, _ := json.Marshal(value)
				outcome = string(x)
				break
			default:
				panic("Unknown data type for the server property!")
			}
			headers = append(headers, map[string]string{
				"name":  name,
				"value": outcome,
			})
		}
		headersMarshaled, _ := json.Marshal(headers)
		rep = append(rep, map[string]string{
			"type":  api.TABLE,
			"title": "Server Properties",
			"data":  string(headersMarshaled),
		})
	}

	return rep
}

func representConnectionClose(event map[string]interface{}) []interface{} {
	rep := make([]interface{}, 0)

	details, _ := json.Marshal([]map[string]string{
		{
			"name":  "Reply Code",
			"value": fmt.Sprintf("%g", event["ReplyCode"].(float64)),
		},
		{
			"name":  "Reply Text",
			"value": event["ReplyText"].(string),
		},
		{
			"name":  "Class ID",
			"value": fmt.Sprintf("%g", event["ClassId"].(float64)),
		},
		{
			"name":  "Method ID",
			"value": fmt.Sprintf("%g", event["MethodId"].(float64)),
		},
	})
	rep = append(rep, map[string]string{
		"type":  api.TABLE,
		"title": "Details",
		"data":  string(details),
	})

	return rep
}

func representQueueBind(event map[string]interface{}) []interface{} {
	rep := make([]interface{}, 0)

	details, _ := json.Marshal([]map[string]string{
		{
			"name":  "Queue",
			"value": event["Queue"].(string),
		},
		{
			"name":  "Exchange",
			"value": event["Exchange"].(string),
		},
		{
			"name":  "RoutingKey",
			"value": event["RoutingKey"].(string),
		},
		{
			"name":  "NoWait",
			"value": strconv.FormatBool(event["NoWait"].(bool)),
		},
	})
	rep = append(rep, map[string]string{
		"type":  api.TABLE,
		"title": "Details",
		"data":  string(details),
	})

	if event["Arguments"] != nil {
		headers := make([]map[string]string, 0)
		for name, value := range event["Arguments"].(map[string]interface{}) {
			headers = append(headers, map[string]string{
				"name":  name,
				"value": value.(string),
			})
		}
		headersMarshaled, _ := json.Marshal(headers)
		rep = append(rep, map[string]string{
			"type":  api.TABLE,
			"title": "Arguments",
			"data":  string(headersMarshaled),
		})
	}

	return rep
}

func representBasicConsume(event map[string]interface{}) []interface{} {
	rep := make([]interface{}, 0)

	details, _ := json.Marshal([]map[string]string{
		{
			"name":  "Queue",
			"value": event["Queue"].(string),
		},
		{
			"name":  "Consumer Tag",
			"value": event["ConsumerTag"].(string),
		},
		{
			"name":  "No Local",
			"value": strconv.FormatBool(event["NoLocal"].(bool)),
		},
		{
			"name":  "No Ack",
			"value": strconv.FormatBool(event["NoAck"].(bool)),
		},
		{
			"name":  "Exclusive",
			"value": strconv.FormatBool(event["Exclusive"].(bool)),
		},
		{
			"name":  "NoWait",
			"value": strconv.FormatBool(event["NoWait"].(bool)),
		},
	})
	rep = append(rep, map[string]string{
		"type":  api.TABLE,
		"title": "Details",
		"data":  string(details),
	})

	if event["Arguments"] != nil {
		headers := make([]map[string]string, 0)
		for name, value := range event["Arguments"].(map[string]interface{}) {
			headers = append(headers, map[string]string{
				"name":  name,
				"value": value.(string),
			})
		}
		headersMarshaled, _ := json.Marshal(headers)
		rep = append(rep, map[string]string{
			"type":  api.TABLE,
			"title": "Arguments",
			"data":  string(headersMarshaled),
		})
	}

	return rep
}
363	tap/extensions/amqp/main.go	Normal file
@@ -0,0 +1,363 @@
package main

import (
	"bufio"
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"log"
	"strconv"

	"github.com/up9inc/mizu/tap/api"
)

var protocol api.Protocol = api.Protocol{
	Name:            "amqp",
	LongName:        "Advanced Message Queuing Protocol 0-9-1",
	Abbreviation:    "AMQP",
	Version:         "0-9-1",
	BackgroundColor: "#ff6600",
	ForegroundColor: "#ffffff",
	FontSize:        12,
	ReferenceLink:   "https://www.rabbitmq.com/amqp-0-9-1-reference.html",
	Ports:           []string{"5671", "5672"},
	Priority:        1,
}

func init() {
	log.Println("Initializing AMQP extension...")
}

type dissecting string

func (d dissecting) Register(extension *api.Extension) {
	extension.Protocol = &protocol
}

func (d dissecting) Ping() {
	log.Printf("pong %s\n", protocol.Name)
}

const amqpRequest string = "amqp_request"

func (d dissecting) Dissect(b *bufio.Reader, isClient bool, tcpID *api.TcpID, counterPair *api.CounterPair, superTimer *api.SuperTimer, superIdentifier *api.SuperIdentifier, emitter api.Emitter) error {
	r := AmqpReader{b}

	var remaining int
	var header *HeaderFrame
	var body []byte

	connectionInfo := &api.ConnectionInfo{
		ClientIP:   tcpID.SrcIP,
		ClientPort: tcpID.SrcPort,
		ServerIP:   tcpID.DstIP,
		ServerPort: tcpID.DstPort,
		IsOutgoing: true,
	}

	eventBasicPublish := &BasicPublish{
		Exchange:   "",
		RoutingKey: "",
		Mandatory:  false,
		Immediate:  false,
		Body:       nil,
		Properties: Properties{},
	}

	eventBasicDeliver := &BasicDeliver{
		ConsumerTag: "",
		DeliveryTag: 0,
		Redelivered: false,
		Exchange:    "",
		RoutingKey:  "",
		Properties:  Properties{},
		Body:        nil,
	}

	var lastMethodFrameMessage Message

	for {
		if superIdentifier.Protocol != nil && superIdentifier.Protocol != &protocol {
			return errors.New("Identified by another protocol")
		}

		frame, err := r.ReadFrame()
		if err == io.EOF {
			// We must read until we see an EOF... very important!
			return errors.New("AMQP EOF")
		}

		switch f := frame.(type) {
		case *HeartbeatFrame:
			// drop

		case *HeaderFrame:
			// start content state
			header = f
			remaining = int(header.Size)
			switch lastMethodFrameMessage.(type) {
			case *BasicPublish:
				eventBasicPublish.Properties = header.Properties
			case *BasicDeliver:
				eventBasicDeliver.Properties = header.Properties
			default:
				frame = nil
			}

		case *BodyFrame:
			// continue until terminated
			body = append(body, f.Body...)
			remaining -= len(f.Body)
			switch lastMethodFrameMessage.(type) {
			case *BasicPublish:
				eventBasicPublish.Body = f.Body
				superIdentifier.Protocol = &protocol
				emitAMQP(*eventBasicPublish, amqpRequest, basicMethodMap[40], connectionInfo, superTimer.CaptureTime, emitter)
			case *BasicDeliver:
				eventBasicDeliver.Body = f.Body
				superIdentifier.Protocol = &protocol
				emitAMQP(*eventBasicDeliver, amqpRequest, basicMethodMap[60], connectionInfo, superTimer.CaptureTime, emitter)
			default:
				body = nil
				frame = nil
			}

		case *MethodFrame:
			lastMethodFrameMessage = f.Method
			switch m := f.Method.(type) {
			case *BasicPublish:
				eventBasicPublish.Exchange = m.Exchange
				eventBasicPublish.RoutingKey = m.RoutingKey
				eventBasicPublish.Mandatory = m.Mandatory
				eventBasicPublish.Immediate = m.Immediate

			case *QueueBind:
				eventQueueBind := &QueueBind{
					Queue:      m.Queue,
					Exchange:   m.Exchange,
					RoutingKey: m.RoutingKey,
					NoWait:     m.NoWait,
					Arguments:  m.Arguments,
				}
				superIdentifier.Protocol = &protocol
				emitAMQP(*eventQueueBind, amqpRequest, queueMethodMap[20], connectionInfo, superTimer.CaptureTime, emitter)

			case *BasicConsume:
				eventBasicConsume := &BasicConsume{
					Queue:       m.Queue,
					ConsumerTag: m.ConsumerTag,
					NoLocal:     m.NoLocal,
					NoAck:       m.NoAck,
					Exclusive:   m.Exclusive,
					NoWait:      m.NoWait,
					Arguments:   m.Arguments,
				}
				superIdentifier.Protocol = &protocol
				emitAMQP(*eventBasicConsume, amqpRequest, basicMethodMap[20], connectionInfo, superTimer.CaptureTime, emitter)

			case *BasicDeliver:
				eventBasicDeliver.ConsumerTag = m.ConsumerTag
				eventBasicDeliver.DeliveryTag = m.DeliveryTag
				eventBasicDeliver.Redelivered = m.Redelivered
				eventBasicDeliver.Exchange = m.Exchange
				eventBasicDeliver.RoutingKey = m.RoutingKey

			case *QueueDeclare:
				eventQueueDeclare := &QueueDeclare{
					Queue:      m.Queue,
					Passive:    m.Passive,
					Durable:    m.Durable,
					AutoDelete: m.AutoDelete,
					Exclusive:  m.Exclusive,
					NoWait:     m.NoWait,
					Arguments:  m.Arguments,
				}
				superIdentifier.Protocol = &protocol
				emitAMQP(*eventQueueDeclare, amqpRequest, queueMethodMap[10], connectionInfo, superTimer.CaptureTime, emitter)

			case *ExchangeDeclare:
				eventExchangeDeclare := &ExchangeDeclare{
					Exchange:   m.Exchange,
					Type:       m.Type,
					Passive:    m.Passive,
					Durable:    m.Durable,
					AutoDelete: m.AutoDelete,
					Internal:   m.Internal,
					NoWait:     m.NoWait,
					Arguments:  m.Arguments,
				}
				superIdentifier.Protocol = &protocol
				emitAMQP(*eventExchangeDeclare, amqpRequest, exchangeMethodMap[10], connectionInfo, superTimer.CaptureTime, emitter)

			case *ConnectionStart:
				eventConnectionStart := &ConnectionStart{
					VersionMajor:     m.VersionMajor,
					VersionMinor:     m.VersionMinor,
					ServerProperties: m.ServerProperties,
					Mechanisms:       m.Mechanisms,
					Locales:          m.Locales,
				}
				superIdentifier.Protocol = &protocol
				emitAMQP(*eventConnectionStart, amqpRequest, connectionMethodMap[10], connectionInfo, superTimer.CaptureTime, emitter)

			case *ConnectionClose:
				eventConnectionClose := &ConnectionClose{
					ReplyCode: m.ReplyCode,
					ReplyText: m.ReplyText,
					ClassId:   m.ClassId,
					MethodId:  m.MethodId,
				}
				superIdentifier.Protocol = &protocol
				emitAMQP(*eventConnectionClose, amqpRequest, connectionMethodMap[50], connectionInfo, superTimer.CaptureTime, emitter)

			default:
				frame = nil

			}

		default:
			// log.Printf("unexpected frame: %+v\n", f)
		}
	}
}

func (d dissecting) Analyze(item *api.OutputChannelItem, entryId string, resolvedSource string, resolvedDestination string) *api.MizuEntry {
	request := item.Pair.Request.Payload.(map[string]interface{})
	reqDetails := request["details"].(map[string]interface{})
	service := "amqp"
	if resolvedDestination != "" {
		service = resolvedDestination
	} else if resolvedSource != "" {
		service = resolvedSource
	}

	summary := ""
	switch request["method"] {
	case basicMethodMap[40]:
		summary = reqDetails["Exchange"].(string)
		break
	case basicMethodMap[60]:
		summary = reqDetails["Exchange"].(string)
		break
	case exchangeMethodMap[10]:
		summary = reqDetails["Exchange"].(string)
		break
	case queueMethodMap[10]:
		summary = reqDetails["Queue"].(string)
		break
	case connectionMethodMap[10]:
		summary = fmt.Sprintf(
			"%s.%s",
			strconv.Itoa(int(reqDetails["VersionMajor"].(float64))),
			strconv.Itoa(int(reqDetails["VersionMinor"].(float64))),
		)
		break
	case connectionMethodMap[50]:
		summary = reqDetails["ReplyText"].(string)
		break
	case queueMethodMap[20]:
		summary = reqDetails["Queue"].(string)
		break
	case basicMethodMap[20]:
		summary = reqDetails["Queue"].(string)
		break
	}

	request["url"] = summary
	entryBytes, _ := json.Marshal(item.Pair)
	return &api.MizuEntry{
		ProtocolName:            protocol.Name,
		ProtocolLongName:        protocol.LongName,
		ProtocolAbbreviation:    protocol.Abbreviation,
		ProtocolVersion:         protocol.Version,
		ProtocolBackgroundColor: protocol.BackgroundColor,
		ProtocolForegroundColor: protocol.ForegroundColor,
		ProtocolFontSize:        protocol.FontSize,
		ProtocolReferenceLink:   protocol.ReferenceLink,
		EntryId:                 entryId,
		Entry:                   string(entryBytes),
		Url:                     fmt.Sprintf("%s%s", service, summary),
		Method:                  request["method"].(string),
		Status:                  0,
		RequestSenderIp:         item.ConnectionInfo.ClientIP,
		Service:                 service,
		Timestamp:               item.Timestamp,
		ElapsedTime:             0,
		Path:                    summary,
		ResolvedSource:          resolvedSource,
		ResolvedDestination:     resolvedDestination,
		SourceIp:                item.ConnectionInfo.ClientIP,
		DestinationIp:           item.ConnectionInfo.ServerIP,
		SourcePort:              item.ConnectionInfo.ClientPort,
		DestinationPort:         item.ConnectionInfo.ServerPort,
		IsOutgoing:              item.ConnectionInfo.IsOutgoing,
	}

}

func (d dissecting) Summarize(entry *api.MizuEntry) *api.BaseEntryDetails {
	return &api.BaseEntryDetails{
		Id:              entry.EntryId,
		Protocol:        protocol,
		Url:             entry.Url,
		RequestSenderIp: entry.RequestSenderIp,
		Service:         entry.Service,
		Summary:         entry.Path,
		StatusCode:      entry.Status,
		Method:          entry.Method,
		Timestamp:       entry.Timestamp,
		SourceIp:        entry.SourceIp,
		DestinationIp:   entry.DestinationIp,
		SourcePort:      entry.SourcePort,
		DestinationPort: entry.DestinationPort,
		IsOutgoing:      entry.IsOutgoing,
		Latency:         0,
		Rules: api.ApplicableRules{
			Latency: 0,
			Status:  false,
		},
	}
}

func (d dissecting) Represent(entry *api.MizuEntry) (p api.Protocol, object []byte, bodySize int64, err error) {
	p = protocol
	bodySize = 0
	var root map[string]interface{}
	json.Unmarshal([]byte(entry.Entry), &root)
	representation := make(map[string]interface{}, 0)
	request := root["request"].(map[string]interface{})["payload"].(map[string]interface{})
	var repRequest []interface{}
	details := request["details"].(map[string]interface{})
	switch request["method"].(string) {
	case basicMethodMap[40]:
		repRequest = representBasicPublish(details)
		break
	case basicMethodMap[60]:
		repRequest = representBasicDeliver(details)
		break
	case queueMethodMap[10]:
		repRequest = representQueueDeclare(details)
		break
	case exchangeMethodMap[10]:
		repRequest = representExchangeDeclare(details)
		break
	case connectionMethodMap[10]:
		repRequest = representConnectionStart(details)
		break
	case connectionMethodMap[50]:
		repRequest = representConnectionClose(details)
		break
	case queueMethodMap[20]:
		repRequest = representQueueBind(details)
		break
	case basicMethodMap[20]:
		repRequest = representBasicConsume(details)
		break
	}
	representation["request"] = repRequest
	object, err = json.Marshal(representation)
	return
}

var Dissector dissecting
465	tap/extensions/amqp/read.go	Normal file
@@ -0,0 +1,465 @@
// Copyright (c) 2012, Sean Treadway, SoundCloud Ltd.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Source code and contact info at http://github.com/streadway/amqp

package main

import (
	"bytes"
	"encoding/binary"
	"errors"
	"io"
	"time"
)

/*
Reads a frame from an input stream and returns an interface that can be cast into
one of the following:

   MethodFrame
   PropertiesFrame
   BodyFrame
   HeartbeatFrame

2.3.5 frame Details

All frames consist of a header (7 octets), a payload of arbitrary size, and a
'frame-end' octet that detects malformed frames:

   0      1         3             7                  size+7 size+8
   +------+---------+-------------+  +------------+  +-----------+
   | type | channel |     size    |  |  payload   |  | frame-end |
   +------+---------+-------------+  +------------+  +-----------+
    octet   short         long         size octets       octet

To read a frame, we:
 1. Read the header and check the frame type and channel.
 2. Depending on the frame type, we read the payload and process it.
 3. Read the frame end octet.

In realistic implementations where performance is a concern, we would use
"read-ahead buffering" or "gathering reads" to avoid doing three separate
system calls to read a frame.
*/
func (r *AmqpReader) ReadFrame() (frame frame, err error) {
	var scratch [7]byte

	if _, err = io.ReadFull(r.R, scratch[:7]); err != nil {
		return
	}

	typ := uint8(scratch[0])
	channel := binary.BigEndian.Uint16(scratch[1:3])
	size := binary.BigEndian.Uint32(scratch[3:7])

	if size > 1000000*16 {
		return nil, ErrMaxSize
	}

	switch typ {
	case frameMethod:
		if frame, err = r.parseMethodFrame(channel, size); err != nil {
			return
		}

	case frameHeader:
		if frame, err = r.parseHeaderFrame(channel, size); err != nil {
			return
		}

	case frameBody:
		if frame, err = r.parseBodyFrame(channel, size); err != nil {
			return nil, err
		}

	case frameHeartbeat:
		if frame, err = r.parseHeartbeatFrame(channel, size); err != nil {
			return
		}

	default:
		return nil, ErrFrame
	}

	if _, err = io.ReadFull(r.R, scratch[:1]); err != nil {
		return nil, err
	}

	if scratch[0] != frameEnd {
		return nil, ErrFrame
	}

	return
}
func readShortstr(r io.Reader) (v string, err error) {
	var length uint8
	if err = binary.Read(r, binary.BigEndian, &length); err != nil {
		return
	}

	bytes := make([]byte, length)
	if _, err = io.ReadFull(r, bytes); err != nil {
		return
	}
	return string(bytes), nil
}

func readLongstr(r io.Reader) (v string, err error) {
	var length uint32
	if err = binary.Read(r, binary.BigEndian, &length); err != nil {
		return
	}

	// slices can't be longer than max int32 value
	if length > (^uint32(0) >> 1) {
		return
	}

	bytes := make([]byte, length)
	if _, err = io.ReadFull(r, bytes); err != nil {
		return
	}
	return string(bytes), nil
}

func readDecimal(r io.Reader) (v Decimal, err error) {
	if err = binary.Read(r, binary.BigEndian, &v.Scale); err != nil {
		return
	}
	if err = binary.Read(r, binary.BigEndian, &v.Value); err != nil {
		return
	}
	return
}

func readFloat32(r io.Reader) (v float32, err error) {
	if err = binary.Read(r, binary.BigEndian, &v); err != nil {
		return
	}
	return
}

func readFloat64(r io.Reader) (v float64, err error) {
	if err = binary.Read(r, binary.BigEndian, &v); err != nil {
		return
	}
	return
}

func readTimestamp(r io.Reader) (v time.Time, err error) {
	var sec int64
	if err = binary.Read(r, binary.BigEndian, &sec); err != nil {
		return
	}
	return time.Unix(sec, 0), nil
}

/*
'A': []interface{}
'D': Decimal
'F': Table
'I': int32
'S': string
'T': time.Time
'V': nil
'b': byte
'd': float64
'f': float32
'l': int64
's': int16
't': bool
'x': []byte
*/
func readField(r io.Reader) (v interface{}, err error) {
	var typ byte
	if err = binary.Read(r, binary.BigEndian, &typ); err != nil {
		return
	}

	switch typ {
	case 't':
		var value uint8
		if err = binary.Read(r, binary.BigEndian, &value); err != nil {
			return
		}
		return (value != 0), nil

	case 'b':
		var value [1]byte
		if _, err = io.ReadFull(r, value[0:1]); err != nil {
			return
		}
		return value[0], nil

	case 's':
		var value int16
		if err = binary.Read(r, binary.BigEndian, &value); err != nil {
			return
		}
		return value, nil

	case 'I':
		var value int32
		if err = binary.Read(r, binary.BigEndian, &value); err != nil {
			return
		}
		return value, nil

	case 'l':
		var value int64
		if err = binary.Read(r, binary.BigEndian, &value); err != nil {
			return
		}
		return value, nil

	case 'f':
		var value float32
		if err = binary.Read(r, binary.BigEndian, &value); err != nil {
			return
		}
		return value, nil

	case 'd':
		var value float64
		if err = binary.Read(r, binary.BigEndian, &value); err != nil {
			return
		}
		return value, nil

	case 'D':
		return readDecimal(r)

	case 'S':
		return readLongstr(r)

	case 'A':
		return readArray(r)

	case 'T':
		return readTimestamp(r)

	case 'F':
		return readTable(r)

	case 'x':
		var len int32
		if err = binary.Read(r, binary.BigEndian, &len); err != nil {
			return nil, err
		}

		value := make([]byte, len)
		if _, err = io.ReadFull(r, value); err != nil {
			return nil, err
		}
		return value, err

	case 'V':
		return nil, nil
	}

	return nil, ErrSyntax
}

/*
Field tables are long strings that contain packed name-value pairs. The
name-value pairs are encoded as short string defining the name, and octet
defining the values type and then the value itself. The valid field types for
tables are an extension of the native integer, bit, string, and timestamp
types, and are shown in the grammar. Multi-octet integer fields are always
held in network byte order.
*/
func readTable(r io.Reader) (table Table, err error) {
	var nested bytes.Buffer
	var str string

	if str, err = readLongstr(r); err != nil {
		return
	}

	nested.Write([]byte(str))

	table = make(Table)

	for nested.Len() > 0 {
		var key string
		var value interface{}

		if key, err = readShortstr(&nested); err != nil {
			return
		}

		if value, err = readField(&nested); err != nil {
			return
		}

		table[key] = value
	}

	return
}
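The wire layout that readTable consumes (a long string packed with shortstr key, type octet, value) can be hand-assembled to make the encoding concrete. A minimal sketch of a one-entry boolean table; the key name and helper are illustrative, not part of this file:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// encodeBoolTable hand-assembles the wire form of a one-entry field
// table {key: val}: a long string whose payload is a shortstr key,
// the type octet 't', and a boolean octet.
func encodeBoolTable(key string, val bool) []byte {
	var payload bytes.Buffer
	payload.WriteByte(byte(len(key))) // shortstr length prefix
	payload.WriteString(key)
	payload.WriteByte('t') // boolean field type tag
	if val {
		payload.WriteByte(1)
	} else {
		payload.WriteByte(0)
	}

	var table bytes.Buffer
	binary.Write(&table, binary.BigEndian, uint32(payload.Len())) // longstr length
	table.Write(payload.Bytes())
	return table.Bytes()
}

func main() {
	fmt.Printf("% x\n", encodeBoolTable("durable", true))
	// 00 00 00 0a 07 64 75 72 61 62 6c 65 74 01
}
```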
func readArray(r io.Reader) ([]interface{}, error) {
	var (
		size uint32
		err  error
	)

	if err = binary.Read(r, binary.BigEndian, &size); err != nil {
		return nil, err
	}

	var (
		lim   = &io.LimitedReader{R: r, N: int64(size)}
		arr   = []interface{}{}
		field interface{}
	)

	for {
		if field, err = readField(lim); err != nil {
			if err == io.EOF {
				break
			}
			return nil, err
		}
		arr = append(arr, field)
	}

	return arr, nil
}

// Checks if this bit mask matches the flags bitset
func hasProperty(mask uint16, prop int) bool {
	return int(mask)&prop > 0
}

func (r *AmqpReader) parseHeaderFrame(channel uint16, size uint32) (frame frame, err error) {
	hf := &HeaderFrame{
		ChannelId: channel,
	}

	if err = binary.Read(r.R, binary.BigEndian, &hf.ClassId); err != nil {
		return
	}

	if err = binary.Read(r.R, binary.BigEndian, &hf.weight); err != nil {
		return
	}

	if err = binary.Read(r.R, binary.BigEndian, &hf.Size); err != nil {
		return
	}

	if hf.Size > 512 {
		return nil, ErrMaxHeaderFrameSize
	}

	var flags uint16

	if err = binary.Read(r.R, binary.BigEndian, &flags); err != nil {
		return
	}

	if hasProperty(flags, flagContentType) {
		if hf.Properties.ContentType, err = readShortstr(r.R); err != nil {
			return
		}
	}
	if hasProperty(flags, flagContentEncoding) {
		if hf.Properties.ContentEncoding, err = readShortstr(r.R); err != nil {
			return
		}
	}
	if hasProperty(flags, flagHeaders) {
		if hf.Properties.Headers, err = readTable(r.R); err != nil {
			return
		}
	}
	if hasProperty(flags, flagDeliveryMode) {
		if err = binary.Read(r.R, binary.BigEndian, &hf.Properties.DeliveryMode); err != nil {
			return
		}
	}
	if hasProperty(flags, flagPriority) {
		if err = binary.Read(r.R, binary.BigEndian, &hf.Properties.Priority); err != nil {
			return
		}
	}
	if hasProperty(flags, flagCorrelationId) {
		if hf.Properties.CorrelationId, err = readShortstr(r.R); err != nil {
			return
		}
	}
	if hasProperty(flags, flagReplyTo) {
		if hf.Properties.ReplyTo, err = readShortstr(r.R); err != nil {
			return
		}
	}
	if hasProperty(flags, flagExpiration) {
		if hf.Properties.Expiration, err = readShortstr(r.R); err != nil {
			return
		}
	}
	if hasProperty(flags, flagMessageId) {
		if hf.Properties.MessageId, err = readShortstr(r.R); err != nil {
			return
		}
	}
	if hasProperty(flags, flagTimestamp) {
		if hf.Properties.Timestamp, err = readTimestamp(r.R); err != nil {
			return
		}
	}
	if hasProperty(flags, flagType) {
		if hf.Properties.Type, err = readShortstr(r.R); err != nil {
			return
		}
	}
	if hasProperty(flags, flagUserId) {
		if hf.Properties.UserId, err = readShortstr(r.R); err != nil {
			return
		}
	}
	if hasProperty(flags, flagAppId) {
		if hf.Properties.AppId, err = readShortstr(r.R); err != nil {
			return
		}
	}
	if hasProperty(flags, flagReserved1) {
		if hf.Properties.reserved1, err = readShortstr(r.R); err != nil {
			return
		}
	}

	return hf, nil
}

func (r *AmqpReader) parseBodyFrame(channel uint16, size uint32) (frame frame, err error) {
	bf := &BodyFrame{
		ChannelId: channel,
		Body:      make([]byte, size),
	}

	if _, err = io.ReadFull(r.R, bf.Body); err != nil {
		bf.Body = nil
		return nil, err
	}

	return bf, nil
}

var errHeartbeatPayload = errors.New("Heartbeats should not have a payload")

func (r *AmqpReader) parseHeartbeatFrame(channel uint16, size uint32) (frame frame, err error) {
	hf := &HeartbeatFrame{
		ChannelId: channel,
	}

	if size > 0 {
		return nil, errHeartbeatPayload
	}

	return hf, nil
}
3309 tap/extensions/amqp/spec091.go Normal file
File diff suppressed because it is too large
17 tap/extensions/amqp/structs.go Normal file
@@ -0,0 +1,17 @@
package main

import (
	"encoding/json"
)

type AMQPPayload struct {
	Data interface{}
}

type AMQPPayloader interface {
	MarshalJSON() ([]byte, error)
}

func (h AMQPPayload) MarshalJSON() ([]byte, error) {
	return json.Marshal(h.Data)
}
436 tap/extensions/amqp/types.go Normal file
@@ -0,0 +1,436 @@
// Copyright (c) 2012, Sean Treadway, SoundCloud Ltd.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Source code and contact info at http://github.com/streadway/amqp

package main

import (
	"fmt"
	"io"
	"time"
)

// Constants for standard AMQP 0-9-1 exchange types.
const (
	ExchangeDirect  = "direct"
	ExchangeFanout  = "fanout"
	ExchangeTopic   = "topic"
	ExchangeHeaders = "headers"
)

var (
	// ErrClosed is returned when the channel or connection is not open
	ErrClosed = &Error{Code: ChannelError, Reason: "channel/connection is not open"}

	// ErrChannelMax is returned when Connection.Channel has been called enough
	// times that all channel IDs have been exhausted in the client or the
	// server.
	ErrChannelMax = &Error{Code: ChannelError, Reason: "channel id space exhausted"}

	// ErrSASL is returned from Dial when the authentication mechanism could not
	// be negotiated.
	ErrSASL = &Error{Code: AccessRefused, Reason: "SASL could not negotiate a shared mechanism"}

	// ErrCredentials is returned when the authenticated client is not authorized
	// to any vhost.
	ErrCredentials = &Error{Code: AccessRefused, Reason: "username or password not allowed"}

	// ErrVhost is returned when the authenticated user is not permitted to
	// access the requested Vhost.
	ErrVhost = &Error{Code: AccessRefused, Reason: "no access to this vhost"}

	// ErrSyntax is a hard protocol error, indicating an unsupported protocol,
	// implementation or encoding.
	ErrSyntax = &Error{Code: SyntaxError, Reason: "invalid field or value inside of a frame"}

	// ErrFrame is returned when the protocol frame cannot be read from the
	// server, indicating an unsupported protocol or unsupported frame type.
	ErrFrame = &Error{Code: FrameError, Reason: "frame could not be parsed"}

	// ErrCommandInvalid is returned when the server sends an unexpected response
	// to this requested message type. This indicates a bug in this client.
	ErrCommandInvalid = &Error{Code: CommandInvalid, Reason: "unexpected command received"}

	// ErrUnexpectedFrame is returned when something other than a method or
	// heartbeat frame is delivered to the Connection, indicating a bug in the
	// client.
	ErrUnexpectedFrame = &Error{Code: UnexpectedFrame, Reason: "unexpected frame received"}

	// ErrFieldType is returned when writing a message containing a Go type unsupported by AMQP.
	ErrFieldType = &Error{Code: SyntaxError, Reason: "unsupported table field type"}

	ErrMaxSize = &Error{Code: MaxSizeError, Reason: "an AMQP message cannot be bigger than 16MB"}

	ErrMaxHeaderFrameSize = &Error{Code: MaxHeaderFrameSizeError, Reason: "an AMQP header cannot be bigger than 512 bytes"}

	ErrBadMethodFrameUnknownMethod = &Error{Code: BadMethodFrameUnknownMethod, Reason: "Bad method frame, unknown method"}

	ErrBadMethodFrameUnknownClass = &Error{Code: BadMethodFrameUnknownClass, Reason: "Bad method frame, unknown class"}
)

// Error captures the code and reason a channel or connection has been closed
// by the server.
type Error struct {
	Code    int    // constant code from the specification
	Reason  string // description of the error
	Server  bool   // true when initiated from the server, false when from this library
	Recover bool   // true when this error can be recovered by retrying later or with different parameters
}

func newError(code uint16, text string) *Error {
	return &Error{
		Code:    int(code),
		Reason:  text,
		Recover: isSoftExceptionCode(int(code)),
		Server:  true,
	}
}

func (e Error) Error() string {
	return fmt.Sprintf("Exception (%d) Reason: %q", e.Code, e.Reason)
}

// Used by header frames to capture routing and header information
type Properties struct {
	ContentType     string    // MIME content type
	ContentEncoding string    // MIME content encoding
	Headers         Table     // Application or header exchange table
	DeliveryMode    uint8     // queue implementation use - Transient (1) or Persistent (2)
	Priority        uint8     // queue implementation use - 0 to 9
	CorrelationId   string    // application use - correlation identifier
	ReplyTo         string    // application use - address to reply to (ex: RPC)
	Expiration      string    // implementation use - message expiration spec
	MessageId       string    // application use - message identifier
	Timestamp       time.Time // application use - message timestamp
	Type            string    // application use - message type name
	UserId          string    // application use - creating user id
	AppId           string    // application use - creating application
	reserved1       string    // was cluster-id - process for buffer consumption
}

// DeliveryMode. Transient means higher throughput but messages will not be
// restored on broker restart. The delivery mode of publishings is unrelated
// to the durability of the queues they reside on. Transient messages will
// not be restored to durable queues, persistent messages will be restored to
// durable queues and lost on non-durable queues during server restart.
//
// This remains typed as uint8 to match Publishing.DeliveryMode. Other
// delivery modes specific to custom queue implementations are not enumerated
// here.
const (
	Transient  uint8 = 1
	Persistent uint8 = 2
)

// The property flags are an array of bits that indicate the presence or
// absence of each property value in sequence. The bits are ordered from
// high to low - bit 15 indicates the first property.
const (
	flagContentType     = 0x8000
	flagContentEncoding = 0x4000
	flagHeaders         = 0x2000
	flagDeliveryMode    = 0x1000
	flagPriority        = 0x0800
	flagCorrelationId   = 0x0400
	flagReplyTo         = 0x0200
	flagExpiration      = 0x0100
	flagMessageId       = 0x0080
	flagTimestamp       = 0x0040
	flagType            = 0x0020
	flagUserId          = 0x0010
	flagAppId           = 0x0008
	flagReserved1       = 0x0004
)
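The presence-bit scheme described above can be exercised in isolation. A sketch restating three of the flag constants together with the hasProperty check the header-frame parser uses:

```go
package main

import "fmt"

// Property-flag bits restated from the const block above so the
// sketch is self-contained.
const (
	flagContentType  = 0x8000
	flagDeliveryMode = 0x1000
	flagPriority     = 0x0800
)

// hasProperty reports whether a property's bit is set in the
// 16-bit property-flags mask.
func hasProperty(mask uint16, prop int) bool {
	return int(mask)&prop > 0
}

func main() {
	mask := uint16(flagContentType | flagDeliveryMode)
	fmt.Println(hasProperty(mask, flagContentType)) // true
	fmt.Println(hasProperty(mask, flagPriority))    // false
}
```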
// Queue captures the current server state of the queue on the server returned
// from Channel.QueueDeclare or Channel.QueueInspect.
type Queue struct {
	Name      string // server confirmed or generated name
	Messages  int    // count of messages not awaiting acknowledgment
	Consumers int    // number of consumers receiving deliveries
}

// Publishing captures the client message sent to the server. The fields
// outside of the Headers table included in this struct mirror the underlying
// fields in the content frame. They use native types for convenience and
// efficiency.
type Publishing struct {
	// Application or exchange specific fields,
	// the headers exchange will inspect this field.
	Headers Table

	// Properties
	ContentType     string    // MIME content type
	ContentEncoding string    // MIME content encoding
	DeliveryMode    uint8     // Transient (0 or 1) or Persistent (2)
	Priority        uint8     // 0 to 9
	CorrelationId   string    // correlation identifier
	ReplyTo         string    // address to reply to (ex: RPC)
	Expiration      string    // message expiration spec
	MessageId       string    // message identifier
	Timestamp       time.Time // message timestamp
	Type            string    // message type name
	UserId          string    // creating user id - ex: "guest"
	AppId           string    // creating application id

	// The application specific payload of the message
	Body []byte
}

// Blocking notifies the server's TCP flow control of the Connection. When a
// server hits a memory or disk alarm it will block all connections until the
// resources are reclaimed. Use NotifyBlock on the Connection to receive these
// events.
type Blocking struct {
	Active bool   // TCP pushback active/inactive on server
	Reason string // Server reason for activation
}

// Confirmation notifies the acknowledgment or negative acknowledgement of a
// publishing identified by its delivery tag. Use NotifyPublish on the Channel
// to consume these events.
type Confirmation struct {
	DeliveryTag uint64 // A 1 based counter of publishings from when the channel was put in Confirm mode
	Ack         bool   // True when the server successfully received the publishing
}

// Decimal matches the AMQP decimal type. Scale is the number of decimal
// digits: Scale == 2, Value == 12345 yields Decimal == 123.45
type Decimal struct {
	Scale uint8
	Value int32
}

// Table stores user supplied fields of the following types:
//
//   bool
//   byte
//   float32
//   float64
//   int
//   int16
//   int32
//   int64
//   nil
//   string
//   time.Time
//   amqp.Decimal
//   amqp.Table
//   []byte
//   []interface{} - containing above types
//
// Functions taking a table will immediately fail when the table contains a
// value of an unsupported type.
//
// The caller must be specific in which precision of integer it wishes to
// encode.
//
// Use a type assertion when reading values from a table for type conversion.
//
// RabbitMQ expects int32 for integer values.
//
type Table map[string]interface{}

func validateField(f interface{}) error {
	switch fv := f.(type) {
	case nil, bool, byte, int, int16, int32, int64, float32, float64, string, []byte, Decimal, time.Time:
		return nil

	case []interface{}:
		for _, v := range fv {
			if err := validateField(v); err != nil {
				return fmt.Errorf("in array %s", err)
			}
		}
		return nil

	case Table:
		for k, v := range fv {
			if err := validateField(v); err != nil {
				return fmt.Errorf("table field %q %s", k, err)
			}
		}
		return nil
	}

	return fmt.Errorf("value %T not supported", f)
}

// Validate returns an error if any Go types in the table are incompatible with AMQP types.
func (t Table) Validate() error {
	return validateField(t)
}

// Heap interface for maintaining delivery tags
type tagSet []uint64

func (set tagSet) Len() int              { return len(set) }
func (set tagSet) Less(i, j int) bool    { return (set)[i] < (set)[j] }
func (set tagSet) Swap(i, j int)         { (set)[i], (set)[j] = (set)[j], (set)[i] }
func (set *tagSet) Push(tag interface{}) { *set = append(*set, tag.(uint64)) }
func (set *tagSet) Pop() interface{} {
	val := (*set)[len(*set)-1]
	*set = (*set)[:len(*set)-1]
	return val
}
type Message interface {
	id() (uint16, uint16)
	wait() bool
	read(io.Reader) error
	write(io.Writer) error
}

type messageWithContent interface {
	Message
	getContent() (Properties, []byte)
	setContent(Properties, []byte)
}

/*
The base interface implemented as:

2.3.5 frame Details

All frames consist of a header (7 octets), a payload of arbitrary size, and a
'frame-end' octet that detects malformed frames:

  0      1         3             7                  size+7 size+8
  +------+---------+-------------+  +------------+  +-----------+
  | type | channel |     size    |  |  payload   |  | frame-end |
  +------+---------+-------------+  +------------+  +-----------+
   octet   short         long         size octets       octet

To read a frame, we:

 1. Read the header and check the frame type and channel.
 2. Depending on the frame type, we read the payload and process it.
 3. Read the frame end octet.

In realistic implementations where performance is a concern, we would use
“read-ahead buffering” or “gathering reads” to avoid doing three separate
system calls to read a frame.
*/
type frame interface {
	write(io.Writer) error
	channel() uint16
}

type AmqpReader struct {
	R io.Reader
}

type writer struct {
	w io.Writer
}

// Implements the frame interface for Connection RPC
type protocolHeader struct{}

func (protocolHeader) write(w io.Writer) error {
	_, err := w.Write([]byte{'A', 'M', 'Q', 'P', 0, 0, 9, 1})
	return err
}

func (protocolHeader) channel() uint16 {
	panic("only valid as initial handshake")
}

/*
Method frames carry the high-level protocol commands (which we call "methods").
One method frame carries one command. The method frame payload has this format:

  0          2           4
  +----------+-----------+-------------- - -
  | class-id | method-id | arguments...
  +----------+-----------+-------------- - -
     short       short    ...

To process a method frame, we:
 1. Read the method frame payload.
 2. Unpack it into a structure. A given method always has the same structure,
    so we can unpack the method rapidly.
 3. Check that the method is allowed in the current context.
 4. Check that the method arguments are valid.
 5. Execute the method.

Method frame bodies are constructed as a list of AMQP data fields (bits,
integers, strings and string tables). The marshalling code is trivially
generated directly from the protocol specifications, and can be very rapid.
*/
type MethodFrame struct {
	ChannelId uint16
	ClassId   uint16
	MethodId  uint16
	Method    Message
}

func (f *MethodFrame) channel() uint16 { return f.ChannelId }

/*
Heartbeating is a technique designed to undo one of TCP/IP's features, namely
its ability to recover from a broken physical connection by closing only after
a quite long time-out. In some scenarios we need to know very rapidly if a
peer is disconnected or not responding for other reasons (e.g. it is looping).
Since heartbeating can be done at a low level, we implement this as a special
type of frame that peers exchange at the transport level, rather than as a
class method.
*/
type HeartbeatFrame struct {
	ChannelId uint16
}

func (f *HeartbeatFrame) channel() uint16 { return f.ChannelId }

/*
Certain methods (such as Basic.Publish, Basic.Deliver, etc.) are formally
defined as carrying content. When a peer sends such a method frame, it always
follows it with a content header and zero or more content body frames.

A content header frame has this format:

  0          2        4           12               14
  +----------+--------+-----------+----------------+------------- - -
  | class-id | weight | body size | property flags | property list...
  +----------+--------+-----------+----------------+------------- - -
     short     short    long long       short          remainder...

We place content body in distinct frames (rather than including it in the
method) so that AMQP may support "zero copy" techniques in which content is
never marshalled or encoded. We place the content properties in their own
frame so that recipients can selectively discard contents they do not want to
process.
*/
type HeaderFrame struct {
	ChannelId  uint16
	ClassId    uint16
	weight     uint16
	Size       uint64
	Properties Properties
}

func (f *HeaderFrame) channel() uint16 { return f.ChannelId }

/*
Content is the application data we carry from client-to-client via the AMQP
server. Content is, roughly speaking, a set of properties plus a binary data
part. The set of allowed properties are defined by the Basic class, and these
form the "content header frame". The data can be any size, and MAY be broken
into several (or many) chunks, each forming a "content body frame".

Looking at the frames for a specific channel, as they pass on the wire, we
might see something like this:

  [method]
  [method] [header] [body] [body]
  [method]
  ...
*/
type BodyFrame struct {
	ChannelId uint16
	Body      []byte
}

func (f *BodyFrame) channel() uint16 { return f.ChannelId }
416 tap/extensions/amqp/write.go Normal file
@@ -0,0 +1,416 @@
// Copyright (c) 2012, Sean Treadway, SoundCloud Ltd.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Source code and contact info at http://github.com/streadway/amqp

package main

import (
	"bufio"
	"bytes"
	"encoding/binary"
	"errors"
	"io"
	"math"
	"time"
)

func (w *writer) WriteFrame(frame frame) (err error) {
	if err = frame.write(w.w); err != nil {
		return
	}

	if buf, ok := w.w.(*bufio.Writer); ok {
		err = buf.Flush()
	}

	return
}

func (f *MethodFrame) write(w io.Writer) (err error) {
	var payload bytes.Buffer

	if f.Method == nil {
		return errors.New("malformed frame: missing method")
	}

	class, method := f.Method.id()

	if err = binary.Write(&payload, binary.BigEndian, class); err != nil {
		return
	}

	if err = binary.Write(&payload, binary.BigEndian, method); err != nil {
		return
	}

	if err = f.Method.write(&payload); err != nil {
		return
	}

	return writeFrame(w, frameMethod, f.ChannelId, payload.Bytes())
}

// Heartbeat
//
// Payload is empty
func (f *HeartbeatFrame) write(w io.Writer) (err error) {
	return writeFrame(w, frameHeartbeat, f.ChannelId, []byte{})
}

// CONTENT HEADER
// 0          2        4           12               14
// +----------+--------+-----------+----------------+------------- - -
// | class-id | weight | body size | property flags | property list...
// +----------+--------+-----------+----------------+------------- - -
//    short     short    long long       short          remainder...
//
func (f *HeaderFrame) write(w io.Writer) (err error) {
	var payload bytes.Buffer
	var zeroTime time.Time

	if err = binary.Write(&payload, binary.BigEndian, f.ClassId); err != nil {
		return
	}

	if err = binary.Write(&payload, binary.BigEndian, f.weight); err != nil {
		return
	}

	if err = binary.Write(&payload, binary.BigEndian, f.Size); err != nil {
		return
	}

	// First pass will build the mask to be serialized, second pass will serialize
	// each of the fields that appear in the mask.

	var mask uint16

	if len(f.Properties.ContentType) > 0 {
		mask = mask | flagContentType
	}
	if len(f.Properties.ContentEncoding) > 0 {
		mask = mask | flagContentEncoding
	}
	if f.Properties.Headers != nil && len(f.Properties.Headers) > 0 {
		mask = mask | flagHeaders
	}
	if f.Properties.DeliveryMode > 0 {
		mask = mask | flagDeliveryMode
	}
	if f.Properties.Priority > 0 {
		mask = mask | flagPriority
	}
	if len(f.Properties.CorrelationId) > 0 {
		mask = mask | flagCorrelationId
	}
	if len(f.Properties.ReplyTo) > 0 {
		mask = mask | flagReplyTo
	}
	if len(f.Properties.Expiration) > 0 {
		mask = mask | flagExpiration
	}
	if len(f.Properties.MessageId) > 0 {
		mask = mask | flagMessageId
	}
	if f.Properties.Timestamp != zeroTime {
		mask = mask | flagTimestamp
	}
	if len(f.Properties.Type) > 0 {
		mask = mask | flagType
	}
	if len(f.Properties.UserId) > 0 {
		mask = mask | flagUserId
	}
	if len(f.Properties.AppId) > 0 {
		mask = mask | flagAppId
	}

	if err = binary.Write(&payload, binary.BigEndian, mask); err != nil {
		return
	}

	if hasProperty(mask, flagContentType) {
		if err = writeShortstr(&payload, f.Properties.ContentType); err != nil {
			return
		}
	}
	if hasProperty(mask, flagContentEncoding) {
		if err = writeShortstr(&payload, f.Properties.ContentEncoding); err != nil {
			return
		}
	}
	if hasProperty(mask, flagHeaders) {
		if err = writeTable(&payload, f.Properties.Headers); err != nil {
			return
		}
	}
	if hasProperty(mask, flagDeliveryMode) {
		if err = binary.Write(&payload, binary.BigEndian, f.Properties.DeliveryMode); err != nil {
			return
		}
	}
	if hasProperty(mask, flagPriority) {
		if err = binary.Write(&payload, binary.BigEndian, f.Properties.Priority); err != nil {
			return
		}
	}
	if hasProperty(mask, flagCorrelationId) {
		if err = writeShortstr(&payload, f.Properties.CorrelationId); err != nil {
			return
		}
	}
	if hasProperty(mask, flagReplyTo) {
		if err = writeShortstr(&payload, f.Properties.ReplyTo); err != nil {
|
||||
return
|
||||
}
|
||||
}
|
||||
if hasProperty(mask, flagExpiration) {
|
||||
if err = writeShortstr(&payload, f.Properties.Expiration); err != nil {
|
||||
return
|
||||
}
|
||||
}
|
||||
if hasProperty(mask, flagMessageId) {
|
||||
if err = writeShortstr(&payload, f.Properties.MessageId); err != nil {
|
||||
return
|
||||
}
|
||||
}
|
||||
if hasProperty(mask, flagTimestamp) {
|
||||
if err = binary.Write(&payload, binary.BigEndian, uint64(f.Properties.Timestamp.Unix())); err != nil {
|
||||
return
|
||||
}
|
||||
}
|
||||
if hasProperty(mask, flagType) {
|
||||
if err = writeShortstr(&payload, f.Properties.Type); err != nil {
|
||||
return
|
||||
}
|
||||
}
|
||||
if hasProperty(mask, flagUserId) {
|
||||
if err = writeShortstr(&payload, f.Properties.UserId); err != nil {
|
||||
return
|
||||
}
|
||||
}
|
||||
if hasProperty(mask, flagAppId) {
|
||||
if err = writeShortstr(&payload, f.Properties.AppId); err != nil {
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
return writeFrame(w, frameHeader, f.ChannelId, payload.Bytes())
|
||||
}
|
||||
|
||||
// Body
|
||||
//
|
||||
// Payload is one byterange from the full body who's size is declared in the
|
||||
// Header frame
|
||||
func (f *BodyFrame) write(w io.Writer) (err error) {
|
||||
return writeFrame(w, frameBody, f.ChannelId, f.Body)
|
||||
}
|
||||
|
||||
func writeFrame(w io.Writer, typ uint8, channel uint16, payload []byte) (err error) {
|
||||
end := []byte{frameEnd}
|
||||
size := uint(len(payload))
|
||||
|
||||
_, err = w.Write([]byte{
|
||||
byte(typ),
|
||||
byte((channel & 0xff00) >> 8),
|
||||
byte((channel & 0x00ff) >> 0),
|
||||
byte((size & 0xff000000) >> 24),
|
||||
byte((size & 0x00ff0000) >> 16),
|
||||
byte((size & 0x0000ff00) >> 8),
|
||||
byte((size & 0x000000ff) >> 0),
|
||||
})
|
||||
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
if _, err = w.Write(payload); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
if _, err = w.Write(end); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
return
|
||||
}
|
||||
|
||||
func writeShortstr(w io.Writer, s string) (err error) {
	b := []byte(s)

	var length = uint8(len(b))

	if err = binary.Write(w, binary.BigEndian, length); err != nil {
		return
	}

	if _, err = w.Write(b[:length]); err != nil {
		return
	}

	return
}

func writeLongstr(w io.Writer, s string) (err error) {
	b := []byte(s)

	var length = uint32(len(b))

	if err = binary.Write(w, binary.BigEndian, length); err != nil {
		return
	}

	if _, err = w.Write(b[:length]); err != nil {
		return
	}

	return
}

/*
'A': []interface{}
'D': Decimal
'F': Table
'I': int32
'S': string
'T': time.Time
'V': nil
'b': byte
'd': float64
'f': float32
'l': int64
's': int16
't': bool
'x': []byte
*/
func writeField(w io.Writer, value interface{}) (err error) {
	var buf [9]byte
	var enc []byte

	switch v := value.(type) {
	case bool:
		buf[0] = 't'
		if v {
			buf[1] = byte(1)
		} else {
			buf[1] = byte(0)
		}
		enc = buf[:2]

	case byte:
		buf[0] = 'b'
		buf[1] = byte(v)
		enc = buf[:2]

	case int16:
		buf[0] = 's'
		binary.BigEndian.PutUint16(buf[1:3], uint16(v))
		enc = buf[:3]

	case int:
		buf[0] = 'I'
		binary.BigEndian.PutUint32(buf[1:5], uint32(v))
		enc = buf[:5]

	case int32:
		buf[0] = 'I'
		binary.BigEndian.PutUint32(buf[1:5], uint32(v))
		enc = buf[:5]

	case int64:
		buf[0] = 'l'
		binary.BigEndian.PutUint64(buf[1:9], uint64(v))
		enc = buf[:9]

	case float32:
		buf[0] = 'f'
		binary.BigEndian.PutUint32(buf[1:5], math.Float32bits(v))
		enc = buf[:5]

	case float64:
		buf[0] = 'd'
		binary.BigEndian.PutUint64(buf[1:9], math.Float64bits(v))
		enc = buf[:9]

	case Decimal:
		buf[0] = 'D'
		buf[1] = byte(v.Scale)
		binary.BigEndian.PutUint32(buf[2:6], uint32(v.Value))
		enc = buf[:6]

	case string:
		buf[0] = 'S'
		binary.BigEndian.PutUint32(buf[1:5], uint32(len(v)))
		enc = append(buf[:5], []byte(v)...)

	case []interface{}: // field-array
		buf[0] = 'A'

		sec := new(bytes.Buffer)
		for _, val := range v {
			if err = writeField(sec, val); err != nil {
				return
			}
		}

		binary.BigEndian.PutUint32(buf[1:5], uint32(sec.Len()))
		if _, err = w.Write(buf[:5]); err != nil {
			return
		}

		if _, err = w.Write(sec.Bytes()); err != nil {
			return
		}

		return

	case time.Time:
		buf[0] = 'T'
		binary.BigEndian.PutUint64(buf[1:9], uint64(v.Unix()))
		enc = buf[:9]

	case Table:
		if _, err = w.Write([]byte{'F'}); err != nil {
			return
		}
		return writeTable(w, v)

	case []byte:
		buf[0] = 'x'
		binary.BigEndian.PutUint32(buf[1:5], uint32(len(v)))
		if _, err = w.Write(buf[0:5]); err != nil {
			return
		}
		if _, err = w.Write(v); err != nil {
			return
		}
		return

	case nil:
		buf[0] = 'V'
		enc = buf[:1]

	default:
		return ErrFieldType
	}

	_, err = w.Write(enc)

	return
}

func writeTable(w io.Writer, table Table) (err error) {
	var buf bytes.Buffer

	for key, val := range table {
		if err = writeShortstr(&buf, key); err != nil {
			return
		}
		if err = writeField(&buf, val); err != nil {
			return
		}
	}

	return writeLongstr(w, string(buf.Bytes()))
}
13
tap/extensions/http/go.mod
Normal file
@@ -0,0 +1,13 @@
module github.com/up9inc/mizu/tap/extensions/http

go 1.16

require (
	github.com/google/martian v2.1.0+incompatible
	github.com/romana/rlog v0.0.0-20171115192701-f018bc92e7d7
	github.com/up9inc/mizu/tap/api v0.0.0
	golang.org/x/net v0.0.0-20210224082022-3d97a244fca7
	golang.org/x/text v0.3.5 // indirect
)

replace github.com/up9inc/mizu/tap/api v0.0.0 => ../../api
12
tap/extensions/http/go.sum
Normal file
@@ -0,0 +1,12 @@
github.com/google/martian v2.1.0+incompatible h1:/CP5g8u/VJHijgedC/Legn3BAbAaWPgecwXBIDzw5no=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/romana/rlog v0.0.0-20171115192701-f018bc92e7d7 h1:jkvpcEatpwuMF5O5LVxTnehj6YZ/aEZN4NWD/Xml4pI=
github.com/romana/rlog v0.0.0-20171115192701-f018bc92e7d7/go.mod h1:KTrHyWpO1sevuXPZwyeZc72ddWRFqNSKDFl7uVWKpg0=
golang.org/x/net v0.0.0-20210224082022-3d97a244fca7 h1:OgUuv8lsRpBibGNbSizVwKWlysjaNzmC9gYMhPVfqFM=
golang.org/x/net v0.0.0-20210224082022-3d97a244fca7/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.5 h1:i6eZZ+zk0SOf0xgBpEpPD18qWcJda6q1sxt3S0kzyUQ=
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
@@ -1,4 +1,4 @@
package tap
package main

import (
	"bufio"
@@ -17,17 +17,19 @@ import (
)

const frameHeaderLen = 9

var clientPreface = []byte(http2.ClientPreface)

const initialHeaderTableSize = 4096
const protoHTTP2 = "HTTP/2.0"
const protoMajorHTTP2 = 2
const protoMinorHTTP2 = 0

var maxHTTP2DataLen int = maxHTTP2DataLenDefault // value initialized during init
var maxHTTP2DataLen = 1 * 1024 * 1024 // 1MB

type messageFragment struct {
	headers []hpack.HeaderField
	data    []byte
	data    []byte
}

type fragmentsByStream map[uint32]*messageFragment
@@ -46,7 +48,7 @@ func (fbs *fragmentsByStream) appendFrame(streamID uint32, frame http2.Frame) {
	if existingFragment, ok := (*fbs)[streamID]; ok {
		existingDataLen := len(existingFragment.data)
		// Never save more than maxHTTP2DataLen bytes
		numBytesToAppend := int(math.Min(float64(maxHTTP2DataLen - existingDataLen), float64(newDataLen)))
		numBytesToAppend := int(math.Min(float64(maxHTTP2DataLen-existingDataLen), float64(newDataLen)))

		existingFragment.data = append(existingFragment.data, frame.Data()[:numBytesToAppend]...)
	} else {
@@ -69,19 +71,19 @@ func (fbs *fragmentsByStream) pop(streamID uint32) ([]hpack.HeaderField, []byte)
	return headers, data
}

func createGrpcAssembler(b *bufio.Reader) GrpcAssembler {
func createGrpcAssembler(b *bufio.Reader) *GrpcAssembler {
	var framerOutput bytes.Buffer
	framer := http2.NewFramer(&framerOutput, b)
	framer.ReadMetaHeaders = hpack.NewDecoder(initialHeaderTableSize, nil)
	return GrpcAssembler{
	return &GrpcAssembler{
		fragmentsByStream: make(fragmentsByStream),
		framer: framer,
		framer:            framer,
	}
}

type GrpcAssembler struct {
	fragmentsByStream fragmentsByStream
	framer *http2.Framer
	framer            *http2.Framer
}

func (ga *GrpcAssembler) readMessage() (uint32, interface{}, error) {
@@ -118,26 +120,26 @@ func (ga *GrpcAssembler) readMessage() (uint32, interface{}, error) {
	var messageHTTP1 interface{}
	if _, ok := headersHTTP1[":method"]; ok {
		messageHTTP1 = http.Request{
			URL: &url.URL{},
			Method: "POST",
			Header: headersHTTP1,
			Proto: protoHTTP2,
			ProtoMajor: protoMajorHTTP2,
			ProtoMinor: protoMinorHTTP2,
			Body: io.NopCloser(strings.NewReader(dataString)),
			URL:           &url.URL{},
			Method:        "POST",
			Header:        headersHTTP1,
			Proto:         protoHTTP2,
			ProtoMajor:    protoMajorHTTP2,
			ProtoMinor:    protoMinorHTTP2,
			Body:          io.NopCloser(strings.NewReader(dataString)),
			ContentLength: int64(len(dataString)),
		}
	} else if _, ok := headersHTTP1[":status"]; ok {
		messageHTTP1 = http.Response{
			Header: headersHTTP1,
			Proto: protoHTTP2,
			ProtoMajor: protoMajorHTTP2,
			ProtoMinor: protoMinorHTTP2,
			Body: io.NopCloser(strings.NewReader(dataString)),
			Header:        headersHTTP1,
			Proto:         protoHTTP2,
			ProtoMajor:    protoMajorHTTP2,
			ProtoMinor:    protoMinorHTTP2,
			Body:          io.NopCloser(strings.NewReader(dataString)),
			ContentLength: int64(len(dataString)),
		}
	} else {
		return 0, nil, errors.New("Failed to assemble stream: neither a request nor a message")
		return 0, nil, errors.New("failed to assemble stream: neither a request nor a message")
	}

	return streamID, messageHTTP1, nil
@@ -225,7 +227,7 @@ func checkClientPreface(b *bufio.Reader) (bool, error) {
func discardClientPreface(b *bufio.Reader) error {
	if isClientPrefacePresent, err := checkClientPreface(b); err != nil {
		return err
	} else if !isClientPrefacePresent{
	} else if !isClientPrefacePresent {
		return errors.New("discardClientPreface: does not begin with client preface")
	}
163
tap/extensions/http/handlers.go
Normal file
@@ -0,0 +1,163 @@
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"io"
	"io/ioutil"
	"net/http"

	"github.com/romana/rlog"

	"github.com/up9inc/mizu/tap/api"
)

func handleHTTP2Stream(grpcAssembler *GrpcAssembler, tcpID *api.TcpID, superTimer *api.SuperTimer, emitter api.Emitter) error {
	streamID, messageHTTP1, err := grpcAssembler.readMessage()
	if err != nil {
		return err
	}

	var item *api.OutputChannelItem

	switch messageHTTP1 := messageHTTP1.(type) {
	case http.Request:
		ident := fmt.Sprintf(
			"%s->%s %s->%s %d",
			tcpID.SrcIP,
			tcpID.DstIP,
			tcpID.SrcPort,
			tcpID.DstPort,
			streamID,
		)
		item = reqResMatcher.registerRequest(ident, &messageHTTP1, superTimer.CaptureTime)
		if item != nil {
			item.ConnectionInfo = &api.ConnectionInfo{
				ClientIP:   tcpID.SrcIP,
				ClientPort: tcpID.SrcPort,
				ServerIP:   tcpID.DstIP,
				ServerPort: tcpID.DstPort,
				IsOutgoing: true,
			}
		}
	case http.Response:
		ident := fmt.Sprintf(
			"%s->%s %s->%s %d",
			tcpID.DstIP,
			tcpID.SrcIP,
			tcpID.DstPort,
			tcpID.SrcPort,
			streamID,
		)
		item = reqResMatcher.registerResponse(ident, &messageHTTP1, superTimer.CaptureTime)
		if item != nil {
			item.ConnectionInfo = &api.ConnectionInfo{
				ClientIP:   tcpID.DstIP,
				ClientPort: tcpID.DstPort,
				ServerIP:   tcpID.SrcIP,
				ServerPort: tcpID.SrcPort,
				IsOutgoing: false,
			}
		}
	}

	if item != nil {
		item.Protocol = http2Protocol
		emitter.Emit(item)
	}

	return nil
}

func handleHTTP1ClientStream(b *bufio.Reader, tcpID *api.TcpID, counterPair *api.CounterPair, superTimer *api.SuperTimer, emitter api.Emitter) error {
	req, err := http.ReadRequest(b)
	if err != nil {
		// log.Println("Error reading stream:", err)
		return err
	}
	counterPair.Request++

	body, err := ioutil.ReadAll(req.Body)
	req.Body = io.NopCloser(bytes.NewBuffer(body)) // rewind
	s := len(body)
	if err != nil {
		rlog.Debugf("[HTTP-request-body] stream %s Got body err: %s", tcpID.Ident, err)
	}
	if err := req.Body.Close(); err != nil {
		rlog.Debugf("[HTTP-request-body-close] stream %s Failed to close request body: %s", tcpID.Ident, err)
	}
	encoding := req.Header["Content-Encoding"]
	rlog.Tracef(1, "HTTP/1 Request: %s %s %s (Body:%d) -> %s", tcpID.Ident, req.Method, req.URL, s, encoding)

	ident := fmt.Sprintf(
		"%s->%s %s->%s %d",
		tcpID.SrcIP,
		tcpID.DstIP,
		tcpID.SrcPort,
		tcpID.DstPort,
		counterPair.Request,
	)
	item := reqResMatcher.registerRequest(ident, req, superTimer.CaptureTime)
	if item != nil {
		item.ConnectionInfo = &api.ConnectionInfo{
			ClientIP:   tcpID.SrcIP,
			ClientPort: tcpID.SrcPort,
			ServerIP:   tcpID.DstIP,
			ServerPort: tcpID.DstPort,
			IsOutgoing: true,
		}
		emitter.Emit(item)
	}
	return nil
}

func handleHTTP1ServerStream(b *bufio.Reader, tcpID *api.TcpID, counterPair *api.CounterPair, superTimer *api.SuperTimer, emitter api.Emitter) error {
	res, err := http.ReadResponse(b, nil)
	if err != nil {
		// log.Println("Error reading stream:", err)
		return err
	}
	counterPair.Response++

	body, err := ioutil.ReadAll(res.Body)
	res.Body = io.NopCloser(bytes.NewBuffer(body)) // rewind
	s := len(body)
	if err != nil {
		rlog.Debugf("[HTTP-response-body] HTTP/%s: failed to get body(parsed len:%d): %s", tcpID.Ident, s, err)
	}
	if err := res.Body.Close(); err != nil {
		rlog.Debugf("[HTTP-response-body-close] HTTP/%s: failed to close body(parsed len:%d): %s", tcpID.Ident, s, err)
	}
	sym := ","
	if res.ContentLength > 0 && res.ContentLength != int64(s) {
		sym = "!="
	}
	contentType, ok := res.Header["Content-Type"]
	if !ok {
		contentType = []string{http.DetectContentType(body)}
	}
	encoding := res.Header["Content-Encoding"]
	rlog.Tracef(1, "HTTP/1 Response: %s %s (%d%s%d%s) -> %s", tcpID.Ident, res.Status, res.ContentLength, sym, s, contentType, encoding)

	ident := fmt.Sprintf(
		"%s->%s %s->%s %d",
		tcpID.DstIP,
		tcpID.SrcIP,
		tcpID.DstPort,
		tcpID.SrcPort,
		counterPair.Response,
	)
	item := reqResMatcher.registerResponse(ident, res, superTimer.CaptureTime)
	if item != nil {
		item.ConnectionInfo = &api.ConnectionInfo{
			ClientIP:   tcpID.DstIP,
			ClientPort: tcpID.DstPort,
			ServerIP:   tcpID.SrcIP,
			ServerPort: tcpID.SrcPort,
			IsOutgoing: false,
		}
		emitter.Emit(item)
	}
	return nil
}
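The handlers above build the same ident key from opposite directions: the request side formats `src->dst`, while the response side swaps the TCP tuple back to `client->server`, so both halves of one exchange meet on a single key in the matcher. A tiny sketch of that symmetry (the IPs and ports are made-up values, not from the capture):

```go
package main

import "fmt"

// requestIdent keys on the request's src->dst direction, as in
// handleHTTP1ClientStream / the http.Request case above.
func requestIdent(srcIP, dstIP, srcPort, dstPort string, id uint32) string {
	return fmt.Sprintf("%s->%s %s->%s %d", srcIP, dstIP, srcPort, dstPort, id)
}

// responseIdent sees the reversed tuple (server is now the source) and
// swaps it back, as in handleHTTP1ServerStream / the http.Response case.
func responseIdent(srcIP, dstIP, srcPort, dstPort string, id uint32) string {
	return fmt.Sprintf("%s->%s %s->%s %d", dstIP, srcIP, dstPort, srcPort, id)
}

func main() {
	req := requestIdent("10.0.0.1", "10.0.0.2", "51000", "80", 1)
	res := responseIdent("10.0.0.2", "10.0.0.1", "80", "51000", 1)
	fmt.Println(req == res) // → true
}
```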
400
tap/extensions/http/main.go
Normal file
@@ -0,0 +1,400 @@
package main

import (
	"bufio"
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"log"
	"net/url"
	"time"

	"github.com/romana/rlog"

	"github.com/up9inc/mizu/tap/api"
)

var protocol api.Protocol = api.Protocol{
	Name:            "http",
	LongName:        "Hypertext Transfer Protocol -- HTTP/1.1",
	Abbreviation:    "HTTP",
	Version:         "1.1",
	BackgroundColor: "#205cf5",
	ForegroundColor: "#ffffff",
	FontSize:        12,
	ReferenceLink:   "https://datatracker.ietf.org/doc/html/rfc2616",
	Ports:           []string{"80", "8080", "50051"},
	Priority:        0,
}

var http2Protocol api.Protocol = api.Protocol{
	Name:            "http",
	LongName:        "Hypertext Transfer Protocol Version 2 (HTTP/2) (gRPC)",
	Abbreviation:    "HTTP/2",
	Version:         "2.0",
	BackgroundColor: "#244c5a",
	ForegroundColor: "#ffffff",
	FontSize:        11,
	ReferenceLink:   "https://datatracker.ietf.org/doc/html/rfc7540",
	Ports:           []string{"80", "8080"},
	Priority:        0,
}

const (
	TypeHttpRequest = iota
	TypeHttpResponse
)

func init() {
	log.Println("Initializing HTTP extension...")
}

type dissecting string

func (d dissecting) Register(extension *api.Extension) {
	extension.Protocol = &protocol
	extension.MatcherMap = reqResMatcher.openMessagesMap
}

func (d dissecting) Ping() {
	log.Printf("pong %s\n", protocol.Name)
}

func (d dissecting) Dissect(b *bufio.Reader, isClient bool, tcpID *api.TcpID, counterPair *api.CounterPair, superTimer *api.SuperTimer, superIdentifier *api.SuperIdentifier, emitter api.Emitter) error {
	ident := fmt.Sprintf("%s->%s:%s->%s", tcpID.SrcIP, tcpID.DstIP, tcpID.SrcPort, tcpID.DstPort)
	isHTTP2, err := checkIsHTTP2Connection(b, isClient)
	if err != nil {
		rlog.Debugf("[HTTP/2-Prepare-Connection] stream %s Failed to check if client is HTTP/2: %s (%v,%+v)", ident, err, err, err)
		// Do something?
	}

	var grpcAssembler *GrpcAssembler
	if isHTTP2 {
		err := prepareHTTP2Connection(b, isClient)
		if err != nil {
			rlog.Debugf("[HTTP/2-Prepare-Connection-After-Check] stream %s error: %s (%v,%+v)", ident, err, err, err)
		}
		grpcAssembler = createGrpcAssembler(b)
	}

	dissected := false
	for {
		if superIdentifier.Protocol != nil && superIdentifier.Protocol != &protocol {
			return errors.New("Identified by another protocol")
		}

		if isHTTP2 {
			err = handleHTTP2Stream(grpcAssembler, tcpID, superTimer, emitter)
			if err == io.EOF || err == io.ErrUnexpectedEOF {
				break
			} else if err != nil {
				rlog.Debugf("[HTTP/2] stream %s error: %s (%v,%+v)", ident, err, err, err)
				continue
			}
			dissected = true
		} else if isClient {
			err = handleHTTP1ClientStream(b, tcpID, counterPair, superTimer, emitter)
			if err == io.EOF || err == io.ErrUnexpectedEOF {
				break
			} else if err != nil {
				rlog.Debugf("[HTTP-request] stream %s Request error: %s (%v,%+v)", ident, err, err, err)
				continue
			}
			dissected = true
		} else {
			err = handleHTTP1ServerStream(b, tcpID, counterPair, superTimer, emitter)
			if err == io.EOF || err == io.ErrUnexpectedEOF {
				break
			} else if err != nil {
				rlog.Debugf("[HTTP-response], stream %s Response error: %s (%v,%+v)", ident, err, err, err)
				continue
			}
			dissected = true
		}
	}

	if !dissected {
		return err
	}
	superIdentifier.Protocol = &protocol
	return nil
}

func SetHostname(address, newHostname string) string {
	replacedUrl, err := url.Parse(address)
	if err != nil {
		log.Printf("error replacing hostname to %s in address %s, returning original %v", newHostname, address, err)
		return address
	}
	replacedUrl.Host = newHostname
	return replacedUrl.String()
}
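`SetHostname` above swaps the host portion of a service address with a resolved name, falling back to the original string on parse errors. A standalone usage sketch reproducing the function as written; the IP and the `catalogue.sock-shop` service name are illustrative assumptions:

```go
package main

import (
	"fmt"
	"log"
	"net/url"
)

// SetHostname is reproduced from the extension above: parse the address,
// overwrite url.URL.Host, and return the original on parse failure.
func SetHostname(address, newHostname string) string {
	replacedUrl, err := url.Parse(address)
	if err != nil {
		log.Printf("error replacing hostname to %s in address %s, returning original %v", newHostname, address, err)
		return address
	}
	replacedUrl.Host = newHostname
	return replacedUrl.String()
}

func main() {
	// Hypothetical: replace a pod IP with a resolved service name.
	fmt.Println(SetHostname("http://10.0.0.5:8080", "catalogue.sock-shop"))
	// → http://catalogue.sock-shop
}
```

Note that `url.URL.Host` covers both host and port, so the replacement drops the original port unless the new hostname includes one.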
func (d dissecting) Analyze(item *api.OutputChannelItem, entryId string, resolvedSource string, resolvedDestination string) *api.MizuEntry {
	var host, scheme, authority, path, service string

	request := item.Pair.Request.Payload.(map[string]interface{})
	response := item.Pair.Response.Payload.(map[string]interface{})
	reqDetails := request["details"].(map[string]interface{})
	resDetails := response["details"].(map[string]interface{})

	for _, header := range reqDetails["headers"].([]interface{}) {
		h := header.(map[string]interface{})
		if h["name"] == "Host" {
			host = h["value"].(string)
		}
		if h["name"] == ":authority" {
			authority = h["value"].(string)
		}
		if h["name"] == ":scheme" {
			scheme = h["value"].(string)
		}
		if h["name"] == ":path" {
			path = h["value"].(string)
		}
	}

	if item.Protocol.Version == "2.0" {
		service = fmt.Sprintf("%s://%s", scheme, authority)
	} else {
		service = fmt.Sprintf("http://%s", host)
		path = reqDetails["url"].(string)
	}

	request["url"] = path
	if resolvedDestination != "" {
		service = SetHostname(service, resolvedDestination)
	} else if resolvedSource != "" {
		service = SetHostname(service, resolvedSource)
	}

	elapsedTime := item.Pair.Response.CaptureTime.Sub(item.Pair.Request.CaptureTime).Round(time.Millisecond).Milliseconds()
	entryBytes, _ := json.Marshal(item.Pair)
	return &api.MizuEntry{
		ProtocolName:            protocol.Name,
		ProtocolLongName:        protocol.LongName,
		ProtocolAbbreviation:    protocol.Abbreviation,
		ProtocolVersion:         item.Protocol.Version,
		ProtocolBackgroundColor: protocol.BackgroundColor,
		ProtocolForegroundColor: protocol.ForegroundColor,
		ProtocolFontSize:        protocol.FontSize,
		ProtocolReferenceLink:   protocol.ReferenceLink,
		EntryId:                 entryId,
		Entry:                   string(entryBytes),
		Url:                     fmt.Sprintf("%s%s", service, path),
		Method:                  reqDetails["method"].(string),
		Status:                  int(resDetails["status"].(float64)),
		RequestSenderIp:         item.ConnectionInfo.ClientIP,
		Service:                 service,
		Timestamp:               item.Timestamp,
		ElapsedTime:             elapsedTime,
		Path:                    path,
		ResolvedSource:          resolvedSource,
		ResolvedDestination:     resolvedDestination,
		SourceIp:                item.ConnectionInfo.ClientIP,
		DestinationIp:           item.ConnectionInfo.ServerIP,
		SourcePort:              item.ConnectionInfo.ClientPort,
		DestinationPort:         item.ConnectionInfo.ServerPort,
		IsOutgoing:              item.ConnectionInfo.IsOutgoing,
	}
}

func (d dissecting) Summarize(entry *api.MizuEntry) *api.BaseEntryDetails {
	var p api.Protocol
	if entry.ProtocolVersion == "2.0" {
		p = http2Protocol
	} else {
		p = protocol
	}
	return &api.BaseEntryDetails{
		Id:              entry.EntryId,
		Protocol:        p,
		Url:             entry.Url,
		RequestSenderIp: entry.RequestSenderIp,
		Service:         entry.Service,
		Path:            entry.Path,
		Summary:         entry.Path,
		StatusCode:      entry.Status,
		Method:          entry.Method,
		Timestamp:       entry.Timestamp,
		SourceIp:        entry.SourceIp,
		DestinationIp:   entry.DestinationIp,
		SourcePort:      entry.SourcePort,
		DestinationPort: entry.DestinationPort,
		IsOutgoing:      entry.IsOutgoing,
		Latency:         0,
		Rules: api.ApplicableRules{
			Latency: 0,
			Status:  false,
		},
	}
}

func representRequest(request map[string]interface{}) (repRequest []interface{}) {
	details, _ := json.Marshal([]map[string]string{
		{
			"name":  "Method",
			"value": request["method"].(string),
		},
		{
			"name":  "URL",
			"value": request["url"].(string),
		},
		{
			"name":  "Body Size",
			"value": fmt.Sprintf("%g bytes", request["bodySize"].(float64)),
		},
	})
	repRequest = append(repRequest, map[string]string{
		"type":  api.TABLE,
		"title": "Details",
		"data":  string(details),
	})

	headers, _ := json.Marshal(request["headers"].([]interface{}))
	repRequest = append(repRequest, map[string]string{
		"type":  api.TABLE,
		"title": "Headers",
		"data":  string(headers),
	})

	cookies, _ := json.Marshal(request["cookies"].([]interface{}))
	repRequest = append(repRequest, map[string]string{
		"type":  api.TABLE,
		"title": "Cookies",
		"data":  string(cookies),
	})

	queryString, _ := json.Marshal(request["queryString"].([]interface{}))
	repRequest = append(repRequest, map[string]string{
		"type":  api.TABLE,
		"title": "Query String",
		"data":  string(queryString),
	})

	postData, _ := request["postData"].(map[string]interface{})
	mimeType, _ := postData["mimeType"]
	if mimeType == nil || len(mimeType.(string)) == 0 {
		mimeType = "text/html"
	}
	text, _ := postData["text"]
	if text != nil {
		repRequest = append(repRequest, map[string]string{
			"type":      api.BODY,
			"title":     "POST Data (text/plain)",
			"encoding":  "",
			"mime_type": mimeType.(string),
			"data":      text.(string),
		})
	}

	if postData["params"] != nil {
		params, _ := json.Marshal(postData["params"].([]interface{}))
		if len(params) > 0 {
			if mimeType == "multipart/form-data" {
				multipart, _ := json.Marshal([]map[string]string{
					{
						"name":  "Files",
						"value": string(params),
					},
				})
				repRequest = append(repRequest, map[string]string{
					"type":  api.TABLE,
					"title": "POST Data (multipart/form-data)",
					"data":  string(multipart),
				})
			} else {
				repRequest = append(repRequest, map[string]string{
					"type":  api.TABLE,
					"title": "POST Data (application/x-www-form-urlencoded)",
					"data":  string(params),
				})
			}
		}
	}

	return
}

func representResponse(response map[string]interface{}) (repResponse []interface{}, bodySize int64) {
	repResponse = make([]interface{}, 0)

	bodySize = int64(response["bodySize"].(float64))

	details, _ := json.Marshal([]map[string]string{
		{
			"name":  "Status",
			"value": fmt.Sprintf("%g", response["status"].(float64)),
		},
		{
			"name":  "Status Text",
			"value": response["statusText"].(string),
		},
		{
			"name":  "Body Size",
			"value": fmt.Sprintf("%d bytes", bodySize),
		},
	})
	repResponse = append(repResponse, map[string]string{
		"type":  api.TABLE,
		"title": "Details",
		"data":  string(details),
	})

	headers, _ := json.Marshal(response["headers"].([]interface{}))
	repResponse = append(repResponse, map[string]string{
		"type":  api.TABLE,
		"title": "Headers",
		"data":  string(headers),
	})

	cookies, _ := json.Marshal(response["cookies"].([]interface{}))
	repResponse = append(repResponse, map[string]string{
		"type":  api.TABLE,
		"title": "Cookies",
		"data":  string(cookies),
	})

	content, _ := response["content"].(map[string]interface{})
	mimeType, _ := content["mimeType"]
	if mimeType == nil || len(mimeType.(string)) == 0 {
		mimeType = "text/html"
	}
	encoding, _ := content["encoding"]
	text, _ := content["text"]
	if text != nil {
		repResponse = append(repResponse, map[string]string{
			"type":      api.BODY,
			"title":     "Body",
			"encoding":  encoding.(string),
			"mime_type": mimeType.(string),
			"data":      text.(string),
		})
	}

	return
}

func (d dissecting) Represent(entry *api.MizuEntry) (p api.Protocol, object []byte, bodySize int64, err error) {
	if entry.ProtocolVersion == "2.0" {
		p = http2Protocol
	} else {
		p = protocol
	}
	var root map[string]interface{}
	json.Unmarshal([]byte(entry.Entry), &root)
	representation := make(map[string]interface{}, 0)
	request := root["request"].(map[string]interface{})["payload"].(map[string]interface{})
	response := root["response"].(map[string]interface{})["payload"].(map[string]interface{})
	reqDetails := request["details"].(map[string]interface{})
	resDetails := response["details"].(map[string]interface{})
	repRequest := representRequest(reqDetails)
	repResponse, bodySize := representResponse(resDetails)
	representation["request"] = repRequest
	representation["response"] = repResponse
	object, err = json.Marshal(representation)
	return
}

var Dissector dissecting
tap/extensions/http/matcher.go (new file, 105 lines)
@@ -0,0 +1,105 @@
package main

import (
	"fmt"
	"net/http"
	"strings"
	"sync"
	"time"

	"github.com/romana/rlog"

	"github.com/up9inc/mizu/tap/api"
)

var reqResMatcher = createResponseRequestMatcher() // global

// Key is {client_addr}:{client_port}->{dest_addr}:{dest_port}_{incremental_counter}
type requestResponseMatcher struct {
	openMessagesMap *sync.Map
}

func createResponseRequestMatcher() requestResponseMatcher {
	newMatcher := &requestResponseMatcher{openMessagesMap: &sync.Map{}}
	return *newMatcher
}

func (matcher *requestResponseMatcher) registerRequest(ident string, request *http.Request, captureTime time.Time) *api.OutputChannelItem {
	split := splitIdent(ident)
	key := genKey(split)

	requestHTTPMessage := api.GenericMessage{
		IsRequest:   true,
		CaptureTime: captureTime,
		Payload: HTTPPayload{
			Type: TypeHttpRequest,
			Data: request,
		},
	}

	if response, found := matcher.openMessagesMap.LoadAndDelete(key); found {
		// Type assertion always succeeds because all of the map's values are of api.GenericMessage type
		responseHTTPMessage := response.(*api.GenericMessage)
		if responseHTTPMessage.IsRequest {
			rlog.Debugf("[Request-Duplicate] Got duplicate request with same identifier")
			return nil
		}
		rlog.Tracef(1, "Matched open Response for %s", key)
		return matcher.preparePair(&requestHTTPMessage, responseHTTPMessage)
	}

	matcher.openMessagesMap.Store(key, &requestHTTPMessage)
	rlog.Tracef(1, "Registered open Request for %s", key)
	return nil
}

func (matcher *requestResponseMatcher) registerResponse(ident string, response *http.Response, captureTime time.Time) *api.OutputChannelItem {
	split := splitIdent(ident)
	key := genKey(split)

	responseHTTPMessage := api.GenericMessage{
		IsRequest:   false,
		CaptureTime: captureTime,
		Payload: HTTPPayload{
			Type: TypeHttpResponse,
			Data: response,
		},
	}

	if request, found := matcher.openMessagesMap.LoadAndDelete(key); found {
		// Type assertion always succeeds because all of the map's values are of api.GenericMessage type
		requestHTTPMessage := request.(*api.GenericMessage)
		if !requestHTTPMessage.IsRequest {
			rlog.Debugf("[Response-Duplicate] Got duplicate response with same identifier")
			return nil
		}
		rlog.Tracef(1, "Matched open Request for %s", key)
		return matcher.preparePair(requestHTTPMessage, &responseHTTPMessage)
	}

	matcher.openMessagesMap.Store(key, &responseHTTPMessage)
	rlog.Tracef(1, "Registered open Response for %s", key)
	return nil
}

func (matcher *requestResponseMatcher) preparePair(requestHTTPMessage *api.GenericMessage, responseHTTPMessage *api.GenericMessage) *api.OutputChannelItem {
	return &api.OutputChannelItem{
		Protocol:       protocol,
		Timestamp:      requestHTTPMessage.CaptureTime.UnixNano() / int64(time.Millisecond),
		ConnectionInfo: nil,
		Pair: &api.RequestResponsePair{
			Request:  *requestHTTPMessage,
			Response: *responseHTTPMessage,
		},
	}
}

func splitIdent(ident string) []string {
	ident = strings.Replace(ident, "->", " ", -1)
	return strings.Split(ident, " ")
}

func genKey(split []string) string {
	key := fmt.Sprintf("%s:%s->%s:%s,%s", split[0], split[2], split[1], split[3], split[4])
	return key
}
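The key scheme above can be exercised standalone. A minimal sketch, with `splitIdent`/`genKey` copied out of the matcher; the ident layout `{src}->{dst} {src_port}->{dst_port} {counter}` is an assumption inferred from the indexing in `genKey`:

```go
package main

import (
	"fmt"
	"strings"
)

// splitIdent flattens "a->b c->d e" into ["a", "b", "c", "d", "e"].
func splitIdent(ident string) []string {
	ident = strings.Replace(ident, "->", " ", -1)
	return strings.Split(ident, " ")
}

// genKey reorders the split parts into "{src}:{src_port}->{dst}:{dst_port},{counter}".
func genKey(split []string) string {
	return fmt.Sprintf("%s:%s->%s:%s,%s", split[0], split[2], split[1], split[3], split[4])
}

func main() {
	// A request and its response must be registered under the same key for
	// the sync.Map LoadAndDelete lookup to pair them.
	fmt.Println(genKey(splitIdent("10.0.0.1->10.0.0.2 54321->80 1")))
	// → 10.0.0.1:54321->10.0.0.2:80,1
}
```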
tap/extensions/http/structs.go (new file, 55 lines)
@@ -0,0 +1,55 @@
package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"net/http"

	"github.com/google/martian/har"
	"github.com/romana/rlog"
)

type HTTPPayload struct {
	Type uint8
	Data interface{}
}

type HTTPPayloader interface {
	MarshalJSON() ([]byte, error)
}

type HTTPWrapper struct {
	Method  string      `json:"method"`
	Url     string      `json:"url"`
	Details interface{} `json:"details"`
}

func (h HTTPPayload) MarshalJSON() ([]byte, error) {
	switch h.Type {
	case TypeHttpRequest:
		harRequest, err := har.NewRequest(h.Data.(*http.Request), true)
		if err != nil {
			rlog.Debugf("convert-request-to-har", "Failed converting request to HAR %s (%v,%+v)", err, err, err)
			return nil, errors.New("Failed converting request to HAR")
		}
		return json.Marshal(&HTTPWrapper{
			Method:  harRequest.Method,
			Url:     "",
			Details: harRequest,
		})
	case TypeHttpResponse:
		harResponse, err := har.NewResponse(h.Data.(*http.Response), true)
		if err != nil {
			rlog.Debugf("convert-response-to-har", "Failed converting response to HAR %s (%v,%+v)", err, err, err)
			return nil, errors.New("Failed converting response to HAR")
		}
		return json.Marshal(&HTTPWrapper{
			Method:  "",
			Url:     "",
			Details: harResponse,
		})
	default:
		panic(fmt.Sprintf("HTTP payload cannot be marshaled: %d\n", h.Type))
	}
}
tap/extensions/kafka/buffer.go (new file, 645 lines)
@@ -0,0 +1,645 @@
package main

import (
	"bytes"
	"fmt"
	"io"
	"math"
	"sync"
	"sync/atomic"
)

// Bytes is an interface implemented by types that represent immutable
// sequences of bytes.
//
// Bytes values are used to abstract the location where record keys and
// values are read from (e.g. in-memory buffers, network sockets, files).
//
// The Close method should be called to release resources held by the object
// when the program is done with it.
//
// Bytes values are generally not safe to use concurrently from multiple
// goroutines.
type Bytes interface {
	io.ReadCloser
	// Returns the number of bytes remaining to be read from the payload.
	Len() int
}

// NewBytes constructs a Bytes value from b.
//
// The returned value references b, it does not make a copy of the backing
// array.
//
// If b is nil, nil is returned to represent a null BYTES value in the kafka
// protocol.
func NewBytes(b []byte) Bytes {
	if b == nil {
		return nil
	}
	r := new(bytesReader)
	r.Reset(b)
	return r
}

// ReadAll is similar to ioutil.ReadAll, but it takes advantage of knowing the
// length of b to minimize the memory footprint.
//
// The function returns a nil slice if b is nil.
// func ReadAll(b Bytes) ([]byte, error) {
// 	if b == nil {
// 		return nil, nil
// 	}
// 	s := make([]byte, b.Len())
// 	_, err := io.ReadFull(b, s)
// 	return s, err
// }

type bytesReader struct{ bytes.Reader }

func (*bytesReader) Close() error { return nil }

type refCount uintptr

func (rc *refCount) ref() { atomic.AddUintptr((*uintptr)(rc), 1) }

func (rc *refCount) unref(onZero func()) {
	if atomic.AddUintptr((*uintptr)(rc), ^uintptr(0)) == 0 {
		onZero()
	}
}

const (
	// Size of the memory buffer for a single page. We use a fairly
	// large size here (64 KiB) because batches exchanged with kafka
	// tend to be multiple kilobytes in size, sometimes hundreds.
	// Using large pages amortizes the overhead of the page metadata
	// and algorithms to manage the pages.
	pageSize = 65536
)

type page struct {
	refc   refCount
	offset int64
	length int
	buffer *[pageSize]byte
}

func newPage(offset int64) *page {
	p, _ := pagePool.Get().(*page)
	if p != nil {
		p.offset = offset
		p.length = 0
		p.ref()
	} else {
		p = &page{
			refc:   1,
			offset: offset,
			buffer: &[pageSize]byte{},
		}
	}
	return p
}

func (p *page) ref() { p.refc.ref() }

func (p *page) unref() { p.refc.unref(func() { pagePool.Put(p) }) }

func (p *page) slice(begin, end int64) []byte {
	i, j := begin-p.offset, end-p.offset

	if i < 0 {
		i = 0
	} else if i > pageSize {
		i = pageSize
	}

	if j < 0 {
		j = 0
	} else if j > pageSize {
		j = pageSize
	}

	if i < j {
		return p.buffer[i:j]
	}

	return nil
}

func (p *page) Cap() int { return pageSize }

func (p *page) Len() int { return p.length }

func (p *page) Size() int64 { return int64(p.length) }

func (p *page) Truncate(n int) {
	if n < p.length {
		p.length = n
	}
}

func (p *page) ReadAt(b []byte, off int64) (int, error) {
	if off -= p.offset; off < 0 || off > pageSize {
		panic("offset out of range")
	}
	if off > int64(p.length) {
		return 0, nil
	}
	return copy(b, p.buffer[off:p.length]), nil
}

func (p *page) ReadFrom(r io.Reader) (int64, error) {
	n, err := io.ReadFull(r, p.buffer[p.length:])
	if err == io.EOF || err == io.ErrUnexpectedEOF {
		err = nil
	}
	p.length += n
	return int64(n), err
}

func (p *page) WriteAt(b []byte, off int64) (int, error) {
	if off -= p.offset; off < 0 || off > pageSize {
		panic("offset out of range")
	}
	n := copy(p.buffer[off:], b)
	if end := int(off) + n; end > p.length {
		p.length = end
	}
	return n, nil
}

func (p *page) Write(b []byte) (int, error) {
	return p.WriteAt(b, p.offset+int64(p.length))
}

var (
	_ io.ReaderAt   = (*page)(nil)
	_ io.ReaderFrom = (*page)(nil)
	_ io.Writer     = (*page)(nil)
	_ io.WriterAt   = (*page)(nil)
)

type pageBuffer struct {
	refc   refCount
	pages  contiguousPages
	length int
	cursor int
}

func newPageBuffer() *pageBuffer {
	b, _ := pageBufferPool.Get().(*pageBuffer)
	if b != nil {
		b.cursor = 0
		b.refc.ref()
	} else {
		b = &pageBuffer{
			refc:  1,
			pages: make(contiguousPages, 0, 16),
		}
	}
	return b
}

func (pb *pageBuffer) refTo(ref *pageRef, begin, end int64) {
	length := end - begin

	if length > math.MaxUint32 {
		panic("reference to contiguous buffer pages exceeds the maximum size of 4 GB")
	}

	ref.pages = append(ref.buffer[:0], pb.pages.slice(begin, end)...)
	ref.pages.ref()
	ref.offset = begin
	ref.length = uint32(length)
}

func (pb *pageBuffer) ref(begin, end int64) *pageRef {
	ref := new(pageRef)
	pb.refTo(ref, begin, end)
	return ref
}

func (pb *pageBuffer) unref() {
	pb.refc.unref(func() {
		pb.pages.unref()
		pb.pages.clear()
		pb.pages = pb.pages[:0]
		pb.length = 0
		pageBufferPool.Put(pb)
	})
}

func (pb *pageBuffer) newPage() *page {
	return newPage(int64(pb.length))
}

func (pb *pageBuffer) Close() error {
	return nil
}

func (pb *pageBuffer) Len() int {
	return pb.length - pb.cursor
}

func (pb *pageBuffer) Size() int64 {
	return int64(pb.length)
}

func (pb *pageBuffer) Discard(n int) (int, error) {
	remain := pb.length - pb.cursor
	if remain < n {
		n = remain
	}
	pb.cursor += n
	return n, nil
}

func (pb *pageBuffer) Truncate(n int) {
	if n < pb.length {
		pb.length = n

		if n < pb.cursor {
			pb.cursor = n
		}

		for i := range pb.pages {
			if p := pb.pages[i]; p.length <= n {
				n -= p.length
			} else {
				if n > 0 {
					pb.pages[i].Truncate(n)
					i++
				}
				pb.pages[i:].unref()
				pb.pages[i:].clear()
				pb.pages = pb.pages[:i]
				break
			}
		}
	}
}

func (pb *pageBuffer) Seek(offset int64, whence int) (int64, error) {
	c, err := seek(int64(pb.cursor), int64(pb.length), offset, whence)
	if err != nil {
		return -1, err
	}
	pb.cursor = int(c)
	return c, nil
}

func (pb *pageBuffer) ReadByte() (byte, error) {
	b := [1]byte{}
	_, err := pb.Read(b[:])
	return b[0], err
}

func (pb *pageBuffer) Read(b []byte) (int, error) {
	if pb.cursor >= pb.length {
		return 0, io.EOF
	}
	n, err := pb.ReadAt(b, int64(pb.cursor))
	pb.cursor += n
	return n, err
}

func (pb *pageBuffer) ReadAt(b []byte, off int64) (int, error) {
	return pb.pages.ReadAt(b, off)
}

func (pb *pageBuffer) ReadFrom(r io.Reader) (int64, error) {
	if len(pb.pages) == 0 {
		pb.pages = append(pb.pages, pb.newPage())
	}

	rn := int64(0)

	for {
		tail := pb.pages[len(pb.pages)-1]
		free := tail.Cap() - tail.Len()

		if free == 0 {
			tail = pb.newPage()
			free = pageSize
			pb.pages = append(pb.pages, tail)
		}

		n, err := tail.ReadFrom(r)
		pb.length += int(n)
		rn += n
		if n < int64(free) {
			return rn, err
		}
	}
}

func (pb *pageBuffer) WriteString(s string) (int, error) {
	return pb.Write([]byte(s))
}

func (pb *pageBuffer) Write(b []byte) (int, error) {
	wn := len(b)
	if wn == 0 {
		return 0, nil
	}

	if len(pb.pages) == 0 {
		pb.pages = append(pb.pages, pb.newPage())
	}

	for len(b) != 0 {
		tail := pb.pages[len(pb.pages)-1]
		free := tail.Cap() - tail.Len()

		if len(b) <= free {
			tail.Write(b)
			pb.length += len(b)
			break
		}

		tail.Write(b[:free])
		b = b[free:]

		pb.length += free
		pb.pages = append(pb.pages, pb.newPage())
	}

	return wn, nil
}

func (pb *pageBuffer) WriteAt(b []byte, off int64) (int, error) {
	n, err := pb.pages.WriteAt(b, off)
	if err != nil {
		return n, err
	}
	if n < len(b) {
		pb.Write(b[n:])
	}
	return len(b), nil
}

func (pb *pageBuffer) WriteTo(w io.Writer) (int64, error) {
	var wn int
	var err error
	pb.pages.scan(int64(pb.cursor), int64(pb.length), func(b []byte) bool {
		var n int
		n, err = w.Write(b)
		wn += n
		return err == nil
	})
	pb.cursor += wn
	return int64(wn), err
}

var (
	_ io.ReaderAt     = (*pageBuffer)(nil)
	_ io.ReaderFrom   = (*pageBuffer)(nil)
	_ io.StringWriter = (*pageBuffer)(nil)
	_ io.Writer       = (*pageBuffer)(nil)
	_ io.WriterAt     = (*pageBuffer)(nil)
	_ io.WriterTo     = (*pageBuffer)(nil)

	pagePool       sync.Pool
	pageBufferPool sync.Pool
)

type contiguousPages []*page

func (pages contiguousPages) ref() {
	for _, p := range pages {
		p.ref()
	}
}

func (pages contiguousPages) unref() {
	for _, p := range pages {
		p.unref()
	}
}

func (pages contiguousPages) clear() {
	for i := range pages {
		pages[i] = nil
	}
}

func (pages contiguousPages) ReadAt(b []byte, off int64) (int, error) {
	rn := 0

	for _, p := range pages.slice(off, off+int64(len(b))) {
		n, _ := p.ReadAt(b, off)
		b = b[n:]
		rn += n
		off += int64(n)
	}

	return rn, nil
}

func (pages contiguousPages) WriteAt(b []byte, off int64) (int, error) {
	wn := 0

	for _, p := range pages.slice(off, off+int64(len(b))) {
		n, _ := p.WriteAt(b, off)
		b = b[n:]
		wn += n
		off += int64(n)
	}

	return wn, nil
}

func (pages contiguousPages) slice(begin, end int64) contiguousPages {
	i := pages.indexOf(begin)
	j := pages.indexOf(end)
	if j < len(pages) {
		j++
	}
	return pages[i:j]
}

func (pages contiguousPages) indexOf(offset int64) int {
	if len(pages) == 0 {
		return 0
	}
	return int((offset - pages[0].offset) / pageSize)
}

func (pages contiguousPages) scan(begin, end int64, f func([]byte) bool) {
	for _, p := range pages.slice(begin, end) {
		if !f(p.slice(begin, end)) {
			break
		}
	}
}

var (
	_ io.ReaderAt = contiguousPages{}
	_ io.WriterAt = contiguousPages{}
)

type pageRef struct {
	buffer [2]*page
	pages  contiguousPages
	offset int64
	cursor int64
	length uint32
	once   uint32
}

func (ref *pageRef) unref() {
	if atomic.CompareAndSwapUint32(&ref.once, 0, 1) {
		ref.pages.unref()
		ref.pages.clear()
		ref.pages = nil
		ref.offset = 0
		ref.cursor = 0
		ref.length = 0
	}
}

func (ref *pageRef) Len() int { return int(ref.Size() - ref.cursor) }

func (ref *pageRef) Size() int64 { return int64(ref.length) }

func (ref *pageRef) Close() error { ref.unref(); return nil }

func (ref *pageRef) String() string {
	return fmt.Sprintf("[offset=%d cursor=%d length=%d]", ref.offset, ref.cursor, ref.length)
}

func (ref *pageRef) Seek(offset int64, whence int) (int64, error) {
	c, err := seek(ref.cursor, int64(ref.length), offset, whence)
	if err != nil {
		return -1, err
	}
	ref.cursor = c
	return c, nil
}

func (ref *pageRef) ReadByte() (byte, error) {
	var c byte
	var ok bool
	ref.scan(ref.cursor, func(b []byte) bool {
		c, ok = b[0], true
		return false
	})
	if ok {
		ref.cursor++
	} else {
		return 0, io.EOF
	}
	return c, nil
}

func (ref *pageRef) Read(b []byte) (int, error) {
	if ref.cursor >= int64(ref.length) {
		return 0, io.EOF
	}
	n, err := ref.ReadAt(b, ref.cursor)
	ref.cursor += int64(n)
	return n, err
}

func (ref *pageRef) ReadAt(b []byte, off int64) (int, error) {
	limit := ref.offset + int64(ref.length)
	off += ref.offset

	if off >= limit {
		return 0, io.EOF
	}

	if off+int64(len(b)) > limit {
		b = b[:limit-off]
	}

	if len(b) == 0 {
		return 0, nil
	}

	n, err := ref.pages.ReadAt(b, off)
	if n == 0 && err == nil {
		err = io.EOF
	}
	return n, err
}

func (ref *pageRef) WriteTo(w io.Writer) (wn int64, err error) {
	ref.scan(ref.cursor, func(b []byte) bool {
		var n int
		n, err = w.Write(b)
		wn += int64(n)
		return err == nil
	})
	ref.cursor += wn
	return
}

func (ref *pageRef) scan(off int64, f func([]byte) bool) {
	begin := ref.offset + off
	end := ref.offset + int64(ref.length)
	ref.pages.scan(begin, end, f)
}

var (
	_ io.Closer   = (*pageRef)(nil)
	_ io.Seeker   = (*pageRef)(nil)
	_ io.Reader   = (*pageRef)(nil)
	_ io.ReaderAt = (*pageRef)(nil)
	_ io.WriterTo = (*pageRef)(nil)
)

type pageRefAllocator struct {
	refs []pageRef
	head int
	size int
}

func (a *pageRefAllocator) newPageRef() *pageRef {
	if a.head == len(a.refs) {
		a.refs = make([]pageRef, a.size)
		a.head = 0
	}
	ref := &a.refs[a.head]
	a.head++
	return ref
}

func unref(x interface{}) {
	if r, _ := x.(interface{ unref() }); r != nil {
		r.unref()
	}
}

func seek(cursor, limit, offset int64, whence int) (int64, error) {
	switch whence {
	case io.SeekStart:
		// absolute offset
	case io.SeekCurrent:
		offset = cursor + offset
	case io.SeekEnd:
		offset = limit - offset
	default:
		return -1, fmt.Errorf("seek: invalid whence value: %d", whence)
	}
	if offset < 0 {
		offset = 0
	}
	if offset > limit {
		offset = limit
	}
	return offset, nil
}

func closeBytes(b Bytes) {
	if b != nil {
		b.Close()
	}
}

func resetBytes(b Bytes) {
	if r, _ := b.(interface{ Reset() }); r != nil {
		r.Reset()
	}
}
tap/extensions/kafka/cluster.go (new file, 143 lines)
@@ -0,0 +1,143 @@
package main

import (
	"fmt"
	"sort"
	"strings"
	"text/tabwriter"
)

type Cluster struct {
	ClusterID  string
	Controller int32
	Brokers    map[int32]Broker
	Topics     map[string]Topic
}

func (c Cluster) BrokerIDs() []int32 {
	brokerIDs := make([]int32, 0, len(c.Brokers))
	for id := range c.Brokers {
		brokerIDs = append(brokerIDs, id)
	}
	sort.Slice(brokerIDs, func(i, j int) bool {
		return brokerIDs[i] < brokerIDs[j]
	})
	return brokerIDs
}

func (c Cluster) TopicNames() []string {
	topicNames := make([]string, 0, len(c.Topics))
	for name := range c.Topics {
		topicNames = append(topicNames, name)
	}
	sort.Strings(topicNames)
	return topicNames
}

func (c Cluster) IsZero() bool {
	return c.ClusterID == "" && c.Controller == 0 && len(c.Brokers) == 0 && len(c.Topics) == 0
}

func (c Cluster) Format(w fmt.State, _ rune) {
	tw := new(tabwriter.Writer)
	fmt.Fprintf(w, "CLUSTER: %q\n\n", c.ClusterID)

	tw.Init(w, 0, 8, 2, ' ', 0)
	fmt.Fprint(tw, "  BROKER\tHOST\tPORT\tRACK\tCONTROLLER\n")

	for _, id := range c.BrokerIDs() {
		broker := c.Brokers[id]
		fmt.Fprintf(tw, "  %d\t%s\t%d\t%s\t%t\n", broker.ID, broker.Host, broker.Port, broker.Rack, broker.ID == c.Controller)
	}

	tw.Flush()
	fmt.Fprintln(w)

	tw.Init(w, 0, 8, 2, ' ', 0)
	fmt.Fprint(tw, "  TOPIC\tPARTITIONS\tBROKERS\n")
	topicNames := c.TopicNames()
	brokers := make(map[int32]struct{}, len(c.Brokers))
	brokerIDs := make([]int32, 0, len(c.Brokers))

	for _, name := range topicNames {
		topic := c.Topics[name]

		for _, p := range topic.Partitions {
			for _, id := range p.Replicas {
				brokers[id] = struct{}{}
			}
		}

		for id := range brokers {
			brokerIDs = append(brokerIDs, id)
		}

		fmt.Fprintf(tw, "  %s\t%d\t%s\n", topic.Name, len(topic.Partitions), formatBrokerIDs(brokerIDs, -1))

		for id := range brokers {
			delete(brokers, id)
		}

		brokerIDs = brokerIDs[:0]
	}

	tw.Flush()
	fmt.Fprintln(w)

	if w.Flag('+') {
		for _, name := range topicNames {
			fmt.Fprintf(w, "  TOPIC: %q\n\n", name)

			tw.Init(w, 0, 8, 2, ' ', 0)
			fmt.Fprint(tw, "    PARTITION\tREPLICAS\tISR\tOFFLINE\n")

			for _, p := range c.Topics[name].Partitions {
				fmt.Fprintf(tw, "    %d\t%s\t%s\t%s\n", p.ID,
					formatBrokerIDs(p.Replicas, -1),
					formatBrokerIDs(p.ISR, p.Leader),
					formatBrokerIDs(p.Offline, -1),
				)
			}

			tw.Flush()
			fmt.Fprintln(w)
		}
	}
}

func formatBrokerIDs(brokerIDs []int32, leader int32) string {
	if len(brokerIDs) == 0 {
		return ""
	}

	if len(brokerIDs) == 1 {
		return itoa(brokerIDs[0])
	}

	sort.Slice(brokerIDs, func(i, j int) bool {
		id1 := brokerIDs[i]
		id2 := brokerIDs[j]

		if id1 == leader {
			return true
		}

		if id2 == leader {
			return false
		}

		return id1 < id2
	})

	brokerNames := make([]string, len(brokerIDs))

	for i, id := range brokerIDs {
		brokerNames[i] = itoa(id)
	}

	return strings.Join(brokerNames, ",")
}

var (
	_ fmt.Formatter = Cluster{}
)
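formatBrokerIDs sorts the partition leader ahead of the remaining broker IDs, which stay in ascending order. A minimal standalone sketch of that comparator (formatLeaderFirst is a hypothetical name; itoa is replaced with strconv.Itoa here):

```go
package main

import (
	"fmt"
	"sort"
	"strconv"
	"strings"
)

// formatLeaderFirst reproduces the leader-first ordering: the leader ID wins
// every comparison, everything else falls back to numeric order.
func formatLeaderFirst(brokerIDs []int32, leader int32) string {
	sort.Slice(brokerIDs, func(i, j int) bool {
		id1, id2 := brokerIDs[i], brokerIDs[j]
		if id1 == leader {
			return true
		}
		if id2 == leader {
			return false
		}
		return id1 < id2
	})
	names := make([]string, len(brokerIDs))
	for i, id := range brokerIDs {
		names[i] = strconv.Itoa(int(id))
	}
	return strings.Join(names, ",")
}

func main() {
	// The leader (2) sorts ahead of the other replicas, which stay ascending.
	fmt.Println(formatLeaderFirst([]int32{3, 1, 2}, 2)) // → 2,1,3
}
```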
tap/extensions/kafka/compression.go (new file, 30 lines)
@@ -0,0 +1,30 @@
package main

import (
	"errors"

	"github.com/segmentio/kafka-go/compress"
)

type Compression = compress.Compression

type CompressionCodec = compress.Codec

// TODO: this file should probably go away once the internals of the package
// have moved to use the protocol package.
const (
	compressionCodecMask = 0x07
)

var (
	errUnknownCodec = errors.New("the compression code is invalid or its codec has not been imported")
)

// resolveCodec looks up a codec by Code()
func resolveCodec(code int8) (CompressionCodec, error) {
	codec := compress.Compression(code).Codec()
	if codec == nil {
		return nil, errUnknownCodec
	}
	return codec, nil
}
tap/extensions/kafka/decode.go (new file, 598 lines)
@@ -0,0 +1,598 @@
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"hash/crc32"
	"io"
	"io/ioutil"
	"reflect"
	"strings"
	"sync"
	"sync/atomic"
)

type discarder interface {
	Discard(int) (int, error)
}

type decoder struct {
	reader io.Reader
	remain int
	buffer [8]byte
	err    error
	table  *crc32.Table
	crc32  uint32
}

func (d *decoder) Reset(r io.Reader, n int) {
	d.reader = r
	d.remain = n
	d.buffer = [8]byte{}
	d.err = nil
	d.table = nil
	d.crc32 = 0
}

func (d *decoder) Read(b []byte) (int, error) {
	if d.err != nil {
		return 0, d.err
	}
	if d.remain == 0 {
		return 0, io.EOF
	}
	if len(b) > d.remain {
		b = b[:d.remain]
	}
	n, err := d.reader.Read(b)
	if n > 0 && d.table != nil {
		d.crc32 = crc32.Update(d.crc32, d.table, b[:n])
	}
	d.remain -= n
	return n, err
}

func (d *decoder) ReadByte() (byte, error) {
	c := d.readByte()
	return c, d.err
}

func (d *decoder) done() bool {
	return d.remain == 0 || d.err != nil
}

func (d *decoder) setCRC(table *crc32.Table) {
	d.table, d.crc32 = table, 0
}

func (d *decoder) decodeBool(v value) {
	v.setBool(d.readBool())
}

func (d *decoder) decodeInt8(v value) {
	v.setInt8(d.readInt8())
}

func (d *decoder) decodeInt16(v value) {
	v.setInt16(d.readInt16())
}

func (d *decoder) decodeInt32(v value) {
	v.setInt32(d.readInt32())
}

func (d *decoder) decodeInt64(v value) {
	v.setInt64(d.readInt64())
}

func (d *decoder) decodeString(v value) {
	v.setString(d.readString())
}

func (d *decoder) decodeCompactString(v value) {
	v.setString(d.readCompactString())
}

func (d *decoder) decodeBytes(v value) {
	v.setBytes(d.readBytes())
}

func (d *decoder) decodeCompactBytes(v value) {
	v.setBytes(d.readCompactBytes())
}

func (d *decoder) decodeArray(v value, elemType reflect.Type, decodeElem decodeFunc) {
	if n := d.readInt32(); n < 0 || n > 65535 {
		v.setArray(array{})
	} else {
		a := makeArray(elemType, int(n))
		for i := 0; i < int(n) && d.remain > 0; i++ {
			decodeElem(d, a.index(i))
		}
		v.setArray(a)
	}
}

func (d *decoder) decodeCompactArray(v value, elemType reflect.Type, decodeElem decodeFunc) {
	if n := d.readUnsignedVarInt(); n < 1 || n > 65535 {
		v.setArray(array{})
	} else {
		a := makeArray(elemType, int(n-1))
		for i := 0; i < int(n-1) && d.remain > 0; i++ {
			decodeElem(d, a.index(i))
		}
		v.setArray(a)
	}
}

func (d *decoder) decodeRecordV0(v value) {
	x := &RecordV0{}
	x.Unknown = d.readInt8()
	x.Attributes = d.readInt8()
	x.TimestampDelta = d.readInt8()
	x.OffsetDelta = d.readInt8()

	x.KeyLength = int8(d.readVarInt())
	key := strings.Builder{}
	for i := 0; i < int(x.KeyLength); i++ {
		key.WriteString(fmt.Sprintf("%c", d.readInt8()))
	}
	x.Key = key.String()

	x.ValueLen = int8(d.readVarInt())
	value := strings.Builder{}
	for i := 0; i < int(x.ValueLen); i++ {
		value.WriteString(fmt.Sprintf("%c", d.readInt8()))
	}
	x.Value = value.String()

	headerLen := d.readInt8() / 2
	headers := make([]RecordHeader, 0)
	for i := 0; i < int(headerLen); i++ {
		header := &RecordHeader{}

		header.HeaderKeyLength = int8(d.readVarInt())
		headerKey := strings.Builder{}
		for j := 0; j < int(header.HeaderKeyLength); j++ {
			headerKey.WriteString(fmt.Sprintf("%c", d.readInt8()))
		}
		header.HeaderKey = headerKey.String()

		header.HeaderValueLength = int8(d.readVarInt())
		headerValue := strings.Builder{}
		for j := 0; j < int(header.HeaderValueLength); j++ {
			headerValue.WriteString(fmt.Sprintf("%c", d.readInt8()))
		}
		header.Value = headerValue.String()

		headers = append(headers, *header)
	}
	x.Headers = headers

	v.val.Set(valueOf(x).val)
}

func (d *decoder) discardAll() {
	d.discard(d.remain)
}

func (d *decoder) discard(n int) {
	if n > d.remain {
		n = d.remain
	}
	var err error
	if r, _ := d.reader.(discarder); r != nil {
		n, err = r.Discard(n)
		d.remain -= n
	} else {
		_, err = io.Copy(ioutil.Discard, d)
	}
	d.setError(err)
}

func (d *decoder) read(n int) []byte {
	b := make([]byte, n)
	n, err := io.ReadFull(d, b)
	b = b[:n]
	d.setError(err)
	return b
}

func (d *decoder) writeTo(w io.Writer, n int) {
	limit := d.remain
	if n < limit {
		d.remain = n
	}
	c, err := io.Copy(w, d)
	if int(c) < n && err == nil {
		err = io.ErrUnexpectedEOF
	}
	d.remain = limit - int(c)
	d.setError(err)
}

func (d *decoder) setError(err error) {
	if d.err == nil && err != nil {
		d.err = err
		d.discardAll()
	}
}

func (d *decoder) readFull(b []byte) bool {
	n, err := io.ReadFull(d, b)
	d.setError(err)
	return n == len(b)
}

func (d *decoder) readByte() byte {
	if d.readFull(d.buffer[:1]) {
		return d.buffer[0]
	}
	return 0
}

func (d *decoder) readBool() bool {
	return d.readByte() != 0
}

func (d *decoder) readInt8() int8 {
	if d.readFull(d.buffer[:1]) {
		return decodeReadInt8(d.buffer[:1])
	}
	return 0
}

func (d *decoder) readInt16() int16 {
	if d.readFull(d.buffer[:2]) {
		return decodeReadInt16(d.buffer[:2])
	}
	return 0
}

func (d *decoder) readInt32() int32 {
	if d.readFull(d.buffer[:4]) {
		return decodeReadInt32(d.buffer[:4])
	}
	return 0
}

func (d *decoder) readInt64() int64 {
|
||||
if d.readFull(d.buffer[:8]) {
|
||||
return decodeReadInt64(d.buffer[:8])
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func (d *decoder) readString() string {
|
||||
if n := d.readInt16(); n < 0 {
|
||||
return ""
|
||||
} else {
|
||||
return bytesToString(d.read(int(n)))
|
||||
}
|
||||
}
|
||||
|
||||
func (d *decoder) readVarString() string {
|
||||
if n := d.readVarInt(); n < 0 {
|
||||
return ""
|
||||
} else {
|
||||
return bytesToString(d.read(int(n)))
|
||||
}
|
||||
}
|
||||
|
||||
func (d *decoder) readCompactString() string {
|
||||
if n := d.readUnsignedVarInt(); n < 1 {
|
||||
return ""
|
||||
} else {
|
||||
return bytesToString(d.read(int(n - 1)))
|
||||
}
|
||||
}
|
||||
|
||||
func (d *decoder) readBytes() []byte {
|
||||
if n := d.readInt32(); n < 0 {
|
||||
return nil
|
||||
} else {
|
||||
return d.read(int(n))
|
||||
}
|
||||
}
|
||||
|
||||
func (d *decoder) readBytesTo(w io.Writer) bool {
|
||||
if n := d.readInt32(); n < 0 {
|
||||
return false
|
||||
} else {
|
||||
d.writeTo(w, int(n))
|
||||
return d.err == nil
|
||||
}
|
||||
}
|
||||
|
||||
func (d *decoder) readVarBytes() []byte {
|
||||
if n := d.readVarInt(); n < 0 {
|
||||
return nil
|
||||
} else {
|
||||
return d.read(int(n))
|
||||
}
|
||||
}
|
||||
|
||||
func (d *decoder) readVarBytesTo(w io.Writer) bool {
|
||||
if n := d.readVarInt(); n < 0 {
|
||||
return false
|
||||
} else {
|
||||
d.writeTo(w, int(n))
|
||||
return d.err == nil
|
||||
}
|
||||
}
|
||||
|
||||
func (d *decoder) readCompactBytes() []byte {
|
||||
if n := d.readUnsignedVarInt(); n < 1 {
|
||||
return nil
|
||||
} else {
|
||||
return d.read(int(n - 1))
|
||||
}
|
||||
}
|
||||
|
||||
func (d *decoder) readCompactBytesTo(w io.Writer) bool {
|
||||
if n := d.readUnsignedVarInt(); n < 1 {
|
||||
return false
|
||||
} else {
|
||||
d.writeTo(w, int(n-1))
|
||||
return d.err == nil
|
||||
}
|
||||
}
|
||||
|
||||
func (d *decoder) readVarInt() int64 {
|
||||
n := 11 // varints are at most 11 bytes
|
||||
|
||||
if n > d.remain {
|
||||
n = d.remain
|
||||
}
|
||||
|
||||
x := uint64(0)
|
||||
s := uint(0)
|
||||
|
||||
for n > 0 {
|
||||
b := d.readByte()
|
||||
|
||||
if (b & 0x80) == 0 {
|
||||
x |= uint64(b) << s
|
||||
return int64(x>>1) ^ -(int64(x) & 1)
|
||||
}
|
||||
|
||||
x |= uint64(b&0x7f) << s
|
||||
s += 7
|
||||
n--
|
||||
}
|
||||
|
||||
d.setError(fmt.Errorf("cannot decode varint from input stream"))
|
||||
return 0
|
||||
}
|
||||
|
||||
func (d *decoder) readUnsignedVarInt() uint64 {
|
||||
n := 11 // varints are at most 11 bytes
|
||||
|
||||
if n > d.remain {
|
||||
n = d.remain
|
||||
}
|
||||
|
||||
x := uint64(0)
|
||||
s := uint(0)
|
||||
|
||||
for n > 0 {
|
||||
b := d.readByte()
|
||||
|
||||
if (b & 0x80) == 0 {
|
||||
x |= uint64(b) << s
|
||||
return x
|
||||
}
|
||||
|
||||
x |= uint64(b&0x7f) << s
|
||||
s += 7
|
||||
n--
|
||||
}
|
||||
|
||||
d.setError(fmt.Errorf("cannot decode unsigned varint from input stream"))
|
||||
return 0
|
||||
}
|
||||
|
||||
type decodeFunc func(*decoder, value)
|
||||
|
||||
var (
|
||||
_ io.Reader = (*decoder)(nil)
|
||||
_ io.ByteReader = (*decoder)(nil)
|
||||
|
||||
readerFrom = reflect.TypeOf((*io.ReaderFrom)(nil)).Elem()
|
||||
)
|
||||
|
||||
func decodeFuncOf(typ reflect.Type, version int16, flexible bool, tag structTag) decodeFunc {
|
||||
if reflect.PtrTo(typ).Implements(readerFrom) {
|
||||
return readerDecodeFuncOf(typ)
|
||||
}
|
||||
switch typ.Kind() {
|
||||
case reflect.Bool:
|
||||
return (*decoder).decodeBool
|
||||
case reflect.Int8:
|
||||
return (*decoder).decodeInt8
|
||||
case reflect.Int16:
|
||||
return (*decoder).decodeInt16
|
||||
case reflect.Int32:
|
||||
return (*decoder).decodeInt32
|
||||
case reflect.Int64:
|
||||
return (*decoder).decodeInt64
|
||||
case reflect.String:
|
||||
return stringDecodeFuncOf(flexible, tag)
|
||||
case reflect.Struct:
|
||||
return structDecodeFuncOf(typ, version, flexible)
|
||||
case reflect.Slice:
|
||||
if typ.Elem().Kind() == reflect.Uint8 { // []byte
|
||||
return bytesDecodeFuncOf(flexible, tag)
|
||||
}
|
||||
return arrayDecodeFuncOf(typ, version, flexible, tag)
|
||||
default:
|
||||
panic("unsupported type: " + typ.String())
|
||||
}
|
||||
}
|
||||
|
||||
func stringDecodeFuncOf(flexible bool, tag structTag) decodeFunc {
|
||||
if flexible {
|
||||
// In flexible messages, all strings are compact
|
||||
return (*decoder).decodeCompactString
|
||||
}
|
||||
return (*decoder).decodeString
|
||||
}
|
||||
|
||||
func bytesDecodeFuncOf(flexible bool, tag structTag) decodeFunc {
|
||||
if flexible {
|
||||
// In flexible messages, all arrays are compact
|
||||
return (*decoder).decodeCompactBytes
|
||||
}
|
||||
return (*decoder).decodeBytes
|
||||
}
|
||||
|
||||
func structDecodeFuncOf(typ reflect.Type, version int16, flexible bool) decodeFunc {
|
||||
type field struct {
|
||||
decode decodeFunc
|
||||
index index
|
||||
tagID int
|
||||
}
|
||||
|
||||
var fields []field
|
||||
taggedFields := map[int]*field{}
|
||||
|
||||
if typ == reflect.TypeOf(RecordV0{}) {
|
||||
return (*decoder).decodeRecordV0
|
||||
}
|
||||
|
||||
forEachStructField(typ, func(typ reflect.Type, index index, tag string) {
|
||||
forEachStructTag(tag, func(tag structTag) bool {
|
||||
if tag.MinVersion <= version && version <= tag.MaxVersion {
|
||||
f := field{
|
||||
decode: decodeFuncOf(typ, version, flexible, tag),
|
||||
index: index,
|
||||
tagID: tag.TagID,
|
||||
}
|
||||
|
||||
if tag.TagID < -1 {
|
||||
// Normal required field
|
||||
fields = append(fields, f)
|
||||
} else {
|
||||
// Optional tagged field (flexible messages only)
|
||||
taggedFields[tag.TagID] = &f
|
||||
}
|
||||
return false
|
||||
}
|
||||
return true
|
||||
})
|
||||
})
|
||||
|
||||
return func(d *decoder, v value) {
|
||||
for i := range fields {
|
||||
f := &fields[i]
|
||||
f.decode(d, v.fieldByIndex(f.index))
|
||||
}
|
||||
|
||||
if flexible {
|
||||
// See https://cwiki.apache.org/confluence/display/KAFKA/KIP-482%3A+The+Kafka+Protocol+should+Support+Optional+Tagged+Fields
|
||||
// for details of tag buffers in "flexible" messages.
|
||||
n := int(d.readUnsignedVarInt())
|
||||
|
||||
for i := 0; i < n; i++ {
|
||||
tagID := int(d.readUnsignedVarInt())
|
||||
size := int(d.readUnsignedVarInt())
|
||||
|
||||
f, ok := taggedFields[tagID]
|
||||
if ok {
|
||||
f.decode(d, v.fieldByIndex(f.index))
|
||||
} else {
|
||||
d.read(size)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func arrayDecodeFuncOf(typ reflect.Type, version int16, flexible bool, tag structTag) decodeFunc {
|
||||
elemType := typ.Elem()
|
||||
elemFunc := decodeFuncOf(elemType, version, flexible, tag)
|
||||
if flexible {
|
||||
// In flexible messages, all arrays are compact
|
||||
return func(d *decoder, v value) { d.decodeCompactArray(v, elemType, elemFunc) }
|
||||
}
|
||||
|
||||
return func(d *decoder, v value) { d.decodeArray(v, elemType, elemFunc) }
|
||||
}
|
||||
|
||||
func readerDecodeFuncOf(typ reflect.Type) decodeFunc {
|
||||
typ = reflect.PtrTo(typ)
|
||||
return func(d *decoder, v value) {
|
||||
if d.err == nil {
|
||||
_, err := v.iface(typ).(io.ReaderFrom).ReadFrom(d)
|
||||
if err != nil {
|
||||
d.setError(err)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func decodeReadInt8(b []byte) int8 {
|
||||
return int8(b[0])
|
||||
}
|
||||
|
||||
func decodeReadInt16(b []byte) int16 {
|
||||
return int16(binary.BigEndian.Uint16(b))
|
||||
}
|
||||
|
||||
func decodeReadInt32(b []byte) int32 {
|
||||
return int32(binary.BigEndian.Uint32(b))
|
||||
}
|
||||
|
||||
func decodeReadInt64(b []byte) int64 {
|
||||
return int64(binary.BigEndian.Uint64(b))
|
||||
}
|
||||
|
||||
func Unmarshal(data []byte, version int16, value interface{}) error {
|
||||
typ := elemTypeOf(value)
|
||||
cache, _ := unmarshalers.Load().(map[versionedType]decodeFunc)
|
||||
key := versionedType{typ: typ, version: version}
|
||||
decode := cache[key]
|
||||
|
||||
if decode == nil {
|
||||
decode = decodeFuncOf(reflect.TypeOf(value).Elem(), version, false, structTag{
|
||||
MinVersion: -1,
|
||||
MaxVersion: -1,
|
||||
TagID: -2,
|
||||
Compact: true,
|
||||
Nullable: true,
|
||||
})
|
||||
|
||||
newCache := make(map[versionedType]decodeFunc, len(cache)+1)
|
||||
newCache[key] = decode
|
||||
|
||||
for typ, fun := range cache {
|
||||
newCache[typ] = fun
|
||||
}
|
||||
|
||||
unmarshalers.Store(newCache)
|
||||
}
|
||||
|
||||
d, _ := decoders.Get().(*decoder)
|
||||
if d == nil {
|
||||
d = &decoder{reader: bytes.NewReader(nil)}
|
||||
}
|
||||
|
||||
d.remain = len(data)
|
||||
r, _ := d.reader.(*bytes.Reader)
|
||||
r.Reset(data)
|
||||
|
||||
defer func() {
|
||||
r.Reset(nil)
|
||||
d.Reset(r, 0)
|
||||
decoders.Put(d)
|
||||
}()
|
||||
|
||||
decode(d, valueOf(value))
|
||||
return dontExpectEOF(d.err)
|
||||
}
|
||||
|
||||
var (
|
||||
decoders sync.Pool // *decoder
|
||||
unmarshalers atomic.Value // map[versionedType]decodeFunc
|
||||
)
|
||||
50	tap/extensions/kafka/discard.go	Normal file
@@ -0,0 +1,50 @@
package main

import "bufio"

func discardN(r *bufio.Reader, sz int, n int) (int, error) {
	var err error
	if n <= sz {
		n, err = r.Discard(n)
	} else {
		n, err = r.Discard(sz)
		if err == nil {
			err = errShortRead
		}
	}
	return sz - n, err
}

func discardInt8(r *bufio.Reader, sz int) (int, error) {
	return discardN(r, sz, 1)
}

func discardInt16(r *bufio.Reader, sz int) (int, error) {
	return discardN(r, sz, 2)
}

func discardInt32(r *bufio.Reader, sz int) (int, error) {
	return discardN(r, sz, 4)
}

func discardInt64(r *bufio.Reader, sz int) (int, error) {
	return discardN(r, sz, 8)
}

func discardString(r *bufio.Reader, sz int) (int, error) {
	return readStringWith(r, sz, func(r *bufio.Reader, sz int, n int) (int, error) {
		if n < 0 {
			return sz, nil
		}
		return discardN(r, sz, n)
	})
}

func discardBytes(r *bufio.Reader, sz int) (int, error) {
	return readBytesWith(r, sz, func(r *bufio.Reader, sz int, n int) (int, error) {
		if n < 0 {
			return sz, nil
		}
		return discardN(r, sz, n)
	})
}
645	tap/extensions/kafka/encode.go	Normal file
@@ -0,0 +1,645 @@
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"hash/crc32"
	"io"
	"reflect"
	"sync"
	"sync/atomic"
)

type encoder struct {
	writer io.Writer
	err    error
	table  *crc32.Table
	crc32  uint32
	buffer [32]byte
}

type encoderChecksum struct {
	reader  io.Reader
	encoder *encoder
}

func (e *encoderChecksum) Read(b []byte) (int, error) {
	n, err := e.reader.Read(b)
	if n > 0 {
		e.encoder.update(b[:n])
	}
	return n, err
}

func (e *encoder) Reset(w io.Writer) {
	e.writer = w
	e.err = nil
	e.table = nil
	e.crc32 = 0
	e.buffer = [32]byte{}
}

func (e *encoder) ReadFrom(r io.Reader) (int64, error) {
	if e.table != nil {
		r = &encoderChecksum{
			reader:  r,
			encoder: e,
		}
	}
	return io.Copy(e.writer, r)
}

func (e *encoder) Write(b []byte) (int, error) {
	if e.err != nil {
		return 0, e.err
	}
	n, err := e.writer.Write(b)
	if n > 0 {
		e.update(b[:n])
	}
	if err != nil {
		e.err = err
	}
	return n, err
}

func (e *encoder) WriteByte(b byte) error {
	e.buffer[0] = b
	_, err := e.Write(e.buffer[:1])
	return err
}

func (e *encoder) WriteString(s string) (int, error) {
	// This implementation is an optimization to avoid the heap allocation that
	// would occur when converting the string to a []byte to call crc32.Update.
	//
	// Strings are rarely long in the kafka protocol, so the use of a 32 byte
	// buffer is a good compromise between keeping the encoder value small and
	// limiting the number of calls to Write.
	//
	// We introduced this optimization because memory profiles on the benchmarks
	// showed that most heap allocations were caused by this code path.
	n := 0

	for len(s) != 0 {
		c := copy(e.buffer[:], s)
		w, err := e.Write(e.buffer[:c])
		n += w
		if err != nil {
			return n, err
		}
		s = s[c:]
	}

	return n, nil
}

func (e *encoder) setCRC(table *crc32.Table) {
	e.table, e.crc32 = table, 0
}

func (e *encoder) update(b []byte) {
	if e.table != nil {
		e.crc32 = crc32.Update(e.crc32, e.table, b)
	}
}

func (e *encoder) encodeBool(v value) {
	b := int8(0)
	if v.bool() {
		b = 1
	}
	e.writeInt8(b)
}

func (e *encoder) encodeInt8(v value) {
	e.writeInt8(v.int8())
}

func (e *encoder) encodeInt16(v value) {
	e.writeInt16(v.int16())
}

func (e *encoder) encodeInt32(v value) {
	e.writeInt32(v.int32())
}

func (e *encoder) encodeInt64(v value) {
	e.writeInt64(v.int64())
}

func (e *encoder) encodeString(v value) {
	e.writeString(v.string())
}

func (e *encoder) encodeVarString(v value) {
	e.writeVarString(v.string())
}

func (e *encoder) encodeCompactString(v value) {
	e.writeCompactString(v.string())
}

func (e *encoder) encodeNullString(v value) {
	e.writeNullString(v.string())
}

func (e *encoder) encodeVarNullString(v value) {
	e.writeVarNullString(v.string())
}

func (e *encoder) encodeCompactNullString(v value) {
	e.writeCompactNullString(v.string())
}

func (e *encoder) encodeBytes(v value) {
	e.writeBytes(v.bytes())
}

func (e *encoder) encodeVarBytes(v value) {
	e.writeVarBytes(v.bytes())
}

func (e *encoder) encodeCompactBytes(v value) {
	e.writeCompactBytes(v.bytes())
}

func (e *encoder) encodeNullBytes(v value) {
	e.writeNullBytes(v.bytes())
}

func (e *encoder) encodeVarNullBytes(v value) {
	e.writeVarNullBytes(v.bytes())
}

func (e *encoder) encodeCompactNullBytes(v value) {
	e.writeCompactNullBytes(v.bytes())
}

func (e *encoder) encodeArray(v value, elemType reflect.Type, encodeElem encodeFunc) {
	a := v.array(elemType)
	n := a.length()
	e.writeInt32(int32(n))

	for i := 0; i < n; i++ {
		encodeElem(e, a.index(i))
	}
}

func (e *encoder) encodeCompactArray(v value, elemType reflect.Type, encodeElem encodeFunc) {
	a := v.array(elemType)
	n := a.length()
	e.writeUnsignedVarInt(uint64(n + 1))

	for i := 0; i < n; i++ {
		encodeElem(e, a.index(i))
	}
}

func (e *encoder) encodeNullArray(v value, elemType reflect.Type, encodeElem encodeFunc) {
	a := v.array(elemType)
	if a.isNil() {
		e.writeInt32(-1)
		return
	}

	n := a.length()
	e.writeInt32(int32(n))

	for i := 0; i < n; i++ {
		encodeElem(e, a.index(i))
	}
}

func (e *encoder) encodeCompactNullArray(v value, elemType reflect.Type, encodeElem encodeFunc) {
	a := v.array(elemType)
	if a.isNil() {
		e.writeUnsignedVarInt(0)
		return
	}

	n := a.length()
	e.writeUnsignedVarInt(uint64(n + 1))
	for i := 0; i < n; i++ {
		encodeElem(e, a.index(i))
	}
}

func (e *encoder) writeInt8(i int8) {
	writeInt8(e.buffer[:1], i)
	e.Write(e.buffer[:1])
}

func (e *encoder) writeInt16(i int16) {
	writeInt16(e.buffer[:2], i)
	e.Write(e.buffer[:2])
}

func (e *encoder) writeInt32(i int32) {
	writeInt32(e.buffer[:4], i)
	e.Write(e.buffer[:4])
}

func (e *encoder) writeInt64(i int64) {
	writeInt64(e.buffer[:8], i)
	e.Write(e.buffer[:8])
}

func (e *encoder) writeString(s string) {
	e.writeInt16(int16(len(s)))
	e.WriteString(s)
}

func (e *encoder) writeVarString(s string) {
	e.writeVarInt(int64(len(s)))
	e.WriteString(s)
}

func (e *encoder) writeCompactString(s string) {
	e.writeUnsignedVarInt(uint64(len(s)) + 1)
	e.WriteString(s)
}

func (e *encoder) writeNullString(s string) {
	if s == "" {
		e.writeInt16(-1)
	} else {
		e.writeInt16(int16(len(s)))
		e.WriteString(s)
	}
}

func (e *encoder) writeVarNullString(s string) {
	if s == "" {
		e.writeVarInt(-1)
	} else {
		e.writeVarInt(int64(len(s)))
		e.WriteString(s)
	}
}

func (e *encoder) writeCompactNullString(s string) {
	if s == "" {
		e.writeUnsignedVarInt(0)
	} else {
		e.writeUnsignedVarInt(uint64(len(s)) + 1)
		e.WriteString(s)
	}
}

func (e *encoder) writeBytes(b []byte) {
	e.writeInt32(int32(len(b)))
	e.Write(b)
}

func (e *encoder) writeVarBytes(b []byte) {
	e.writeVarInt(int64(len(b)))
	e.Write(b)
}

func (e *encoder) writeCompactBytes(b []byte) {
	e.writeUnsignedVarInt(uint64(len(b)) + 1)
	e.Write(b)
}

func (e *encoder) writeNullBytes(b []byte) {
	if b == nil {
		e.writeInt32(-1)
	} else {
		e.writeInt32(int32(len(b)))
		e.Write(b)
	}
}

func (e *encoder) writeVarNullBytes(b []byte) {
	if b == nil {
		e.writeVarInt(-1)
	} else {
		e.writeVarInt(int64(len(b)))
		e.Write(b)
	}
}

func (e *encoder) writeCompactNullBytes(b []byte) {
	if b == nil {
		e.writeUnsignedVarInt(0)
	} else {
		e.writeUnsignedVarInt(uint64(len(b)) + 1)
		e.Write(b)
	}
}

func (e *encoder) writeBytesFrom(b Bytes) error {
	size := int64(b.Len())
	e.writeInt32(int32(size))
	n, err := io.Copy(e, b)
	if err == nil && n != size {
		err = fmt.Errorf("size of bytes does not match the number of bytes that were written (size=%d, written=%d): %w", size, n, io.ErrUnexpectedEOF)
	}
	return err
}

func (e *encoder) writeNullBytesFrom(b Bytes) error {
	if b == nil {
		e.writeInt32(-1)
		return nil
	} else {
		size := int64(b.Len())
		e.writeInt32(int32(size))
		n, err := io.Copy(e, b)
		if err == nil && n != size {
			err = fmt.Errorf("size of nullable bytes does not match the number of bytes that were written (size=%d, written=%d): %w", size, n, io.ErrUnexpectedEOF)
		}
		return err
	}
}

func (e *encoder) writeVarNullBytesFrom(b Bytes) error {
	if b == nil {
		e.writeVarInt(-1)
		return nil
	} else {
		size := int64(b.Len())
		e.writeVarInt(size)
		n, err := io.Copy(e, b)
		if err == nil && n != size {
			err = fmt.Errorf("size of nullable bytes does not match the number of bytes that were written (size=%d, written=%d): %w", size, n, io.ErrUnexpectedEOF)
		}
		return err
	}
}

func (e *encoder) writeCompactNullBytesFrom(b Bytes) error {
	if b == nil {
		e.writeUnsignedVarInt(0)
		return nil
	} else {
		size := int64(b.Len())
		e.writeUnsignedVarInt(uint64(size + 1))
		n, err := io.Copy(e, b)
		if err == nil && n != size {
			err = fmt.Errorf("size of compact nullable bytes does not match the number of bytes that were written (size=%d, written=%d): %w", size, n, io.ErrUnexpectedEOF)
		}
		return err
	}
}

func (e *encoder) writeVarInt(i int64) {
	e.writeUnsignedVarInt(uint64((i << 1) ^ (i >> 63)))
}

func (e *encoder) writeUnsignedVarInt(i uint64) {
	b := e.buffer[:]
	n := 0

	for i >= 0x80 && n < len(b) {
		b[n] = byte(i) | 0x80
		i >>= 7
		n++
	}

	if n < len(b) {
		b[n] = byte(i)
		n++
	}

	e.Write(b[:n])
}

type encodeFunc func(*encoder, value)

var (
	_ io.ReaderFrom   = (*encoder)(nil)
	_ io.Writer       = (*encoder)(nil)
	_ io.ByteWriter   = (*encoder)(nil)
	_ io.StringWriter = (*encoder)(nil)

	writerTo = reflect.TypeOf((*io.WriterTo)(nil)).Elem()
)

func encodeFuncOf(typ reflect.Type, version int16, flexible bool, tag structTag) encodeFunc {
	if reflect.PtrTo(typ).Implements(writerTo) {
		return writerEncodeFuncOf(typ)
	}
	switch typ.Kind() {
	case reflect.Bool:
		return (*encoder).encodeBool
	case reflect.Int8:
		return (*encoder).encodeInt8
	case reflect.Int16:
		return (*encoder).encodeInt16
	case reflect.Int32:
		return (*encoder).encodeInt32
	case reflect.Int64:
		return (*encoder).encodeInt64
	case reflect.String:
		return stringEncodeFuncOf(flexible, tag)
	case reflect.Struct:
		return structEncodeFuncOf(typ, version, flexible)
	case reflect.Slice:
		if typ.Elem().Kind() == reflect.Uint8 { // []byte
			return bytesEncodeFuncOf(flexible, tag)
		}
		return arrayEncodeFuncOf(typ, version, flexible, tag)
	default:
		panic("unsupported type: " + typ.String())
	}
}

func stringEncodeFuncOf(flexible bool, tag structTag) encodeFunc {
	switch {
	case flexible && tag.Nullable:
		// In flexible messages, all strings are compact
		return (*encoder).encodeCompactNullString
	case flexible:
		// In flexible messages, all strings are compact
		return (*encoder).encodeCompactString
	case tag.Nullable:
		return (*encoder).encodeNullString
	default:
		return (*encoder).encodeString
	}
}

func bytesEncodeFuncOf(flexible bool, tag structTag) encodeFunc {
	switch {
	case flexible && tag.Nullable:
		// In flexible messages, all arrays are compact
		return (*encoder).encodeCompactNullBytes
	case flexible:
		// In flexible messages, all arrays are compact
		return (*encoder).encodeCompactBytes
	case tag.Nullable:
		return (*encoder).encodeNullBytes
	default:
		return (*encoder).encodeBytes
	}
}

func structEncodeFuncOf(typ reflect.Type, version int16, flexible bool) encodeFunc {
	type field struct {
		encode encodeFunc
		index  index
		tagID  int
	}

	var fields []field
	var taggedFields []field

	forEachStructField(typ, func(typ reflect.Type, index index, tag string) {
		if typ.Size() != 0 { // skip struct{}
			forEachStructTag(tag, func(tag structTag) bool {
				if tag.MinVersion <= version && version <= tag.MaxVersion {
					f := field{
						encode: encodeFuncOf(typ, version, flexible, tag),
						index:  index,
						tagID:  tag.TagID,
					}

					if tag.TagID < -1 {
						// Normal required field
						fields = append(fields, f)
					} else {
						// Optional tagged field (flexible messages only)
						taggedFields = append(taggedFields, f)
					}
					return false
				}
				return true
			})
		}
	})

	return func(e *encoder, v value) {
		for i := range fields {
			f := &fields[i]
			f.encode(e, v.fieldByIndex(f.index))
		}

		if flexible {
			// See https://cwiki.apache.org/confluence/display/KAFKA/KIP-482%3A+The+Kafka+Protocol+should+Support+Optional+Tagged+Fields
			// for details of tag buffers in "flexible" messages.
			e.writeUnsignedVarInt(uint64(len(taggedFields)))

			for i := range taggedFields {
				f := &taggedFields[i]
				e.writeUnsignedVarInt(uint64(f.tagID))

				buf := &bytes.Buffer{}
				se := &encoder{writer: buf}
				f.encode(se, v.fieldByIndex(f.index))
				e.writeUnsignedVarInt(uint64(buf.Len()))
				e.Write(buf.Bytes())
			}
		}
	}
}

func arrayEncodeFuncOf(typ reflect.Type, version int16, flexible bool, tag structTag) encodeFunc {
	elemType := typ.Elem()
	elemFunc := encodeFuncOf(elemType, version, flexible, tag)
	switch {
	case flexible && tag.Nullable:
		// In flexible messages, all arrays are compact
		return func(e *encoder, v value) { e.encodeCompactNullArray(v, elemType, elemFunc) }
	case flexible:
		// In flexible messages, all arrays are compact
		return func(e *encoder, v value) { e.encodeCompactArray(v, elemType, elemFunc) }
	case tag.Nullable:
		return func(e *encoder, v value) { e.encodeNullArray(v, elemType, elemFunc) }
	default:
		return func(e *encoder, v value) { e.encodeArray(v, elemType, elemFunc) }
	}
}

func writerEncodeFuncOf(typ reflect.Type) encodeFunc {
	typ = reflect.PtrTo(typ)
	return func(e *encoder, v value) {
		// Optimization to write directly into the buffer when the encoder
		// does not need to compute a crc32 checksum.
		w := io.Writer(e)
		if e.table == nil {
			w = e.writer
		}
		_, err := v.iface(typ).(io.WriterTo).WriteTo(w)
		if err != nil {
			e.err = err
		}
	}
}

func writeInt8(b []byte, i int8) {
	b[0] = byte(i)
}

func writeInt16(b []byte, i int16) {
	binary.BigEndian.PutUint16(b, uint16(i))
}

func writeInt32(b []byte, i int32) {
	binary.BigEndian.PutUint32(b, uint32(i))
}

func writeInt64(b []byte, i int64) {
	binary.BigEndian.PutUint64(b, uint64(i))
}

func Marshal(version int16, value interface{}) ([]byte, error) {
	typ := typeOf(value)
	cache, _ := marshalers.Load().(map[versionedType]encodeFunc)
	key := versionedType{typ: typ, version: version}
	encode := cache[key]

	if encode == nil {
		encode = encodeFuncOf(reflect.TypeOf(value), version, false, structTag{
			MinVersion: -1,
			MaxVersion: -1,
			TagID:      -2,
			Compact:    true,
			Nullable:   true,
		})

		newCache := make(map[versionedType]encodeFunc, len(cache)+1)
		newCache[key] = encode

		for typ, fun := range cache {
			newCache[typ] = fun
		}

		marshalers.Store(newCache)
	}

	e, _ := encoders.Get().(*encoder)
	if e == nil {
		e = &encoder{writer: new(bytes.Buffer)}
	}

	b, _ := e.writer.(*bytes.Buffer)
	defer func() {
		b.Reset()
		e.Reset(b)
		encoders.Put(e)
	}()

	encode(e, nonAddressableValueOf(value))

	if e.err != nil {
		return nil, e.err
	}

	buf := b.Bytes()
	out := make([]byte, len(buf))
	copy(out, buf)
	return out, nil
}

type versionedType struct {
	typ     _type
	version int16
}

var (
	encoders   sync.Pool    // *encoder
	marshalers atomic.Value // map[versionedType]encodeFunc
)
91	tap/extensions/kafka/error.go	Normal file
@@ -0,0 +1,91 @@
package main

import (
	"fmt"
)

// Error represents client-side protocol errors.
type Error string

func (e Error) Error() string { return string(e) }

func Errorf(msg string, args ...interface{}) Error {
	return Error(fmt.Sprintf(msg, args...))
}

const (
	// ErrNoTopic is returned when a request needs to be sent to a specific
	// topic, but the client did not find it in the cluster metadata.
	ErrNoTopic Error = "topic not found"

	// ErrNoPartition is returned when a request needs to be sent to a specific
	// partition, but the client did not find it in the cluster metadata.
	ErrNoPartition Error = "topic partition not found"

	// ErrNoLeader is returned when a request needs to be sent to a partition
	// leader, but the client could not determine what the leader was at this
	// time.
	ErrNoLeader Error = "topic partition has no leader"

	// ErrNoRecord is returned when attempting to write a message containing an
	// empty record set (which kafka forbids).
	//
	// We handle this case client-side because kafka will close the connection
	// that it received an empty produce request on, causing all concurrent
	// requests to be aborted.
	ErrNoRecord Error = "record set contains no records"

	// ErrNoReset is returned by ResetRecordReader when the record reader does
	// not support being reset.
	ErrNoReset Error = "record sequence does not support reset"
)

type TopicError struct {
	Topic string
	Err   error
}

func NewTopicError(topic string, err error) *TopicError {
	return &TopicError{Topic: topic, Err: err}
}

func NewErrNoTopic(topic string) *TopicError {
	return NewTopicError(topic, ErrNoTopic)
}

func (e *TopicError) Error() string {
	return fmt.Sprintf("%v (topic=%q)", e.Err, e.Topic)
}

func (e *TopicError) Unwrap() error {
	return e.Err
}

type TopicPartitionError struct {
	Topic     string
	Partition int32
	Err       error
}

func NewTopicPartitionError(topic string, partition int32, err error) *TopicPartitionError {
	return &TopicPartitionError{
		Topic:     topic,
		Partition: partition,
		Err:       err,
	}
}

func NewErrNoPartition(topic string, partition int32) *TopicPartitionError {
	return NewTopicPartitionError(topic, partition, ErrNoPartition)
}

func NewErrNoLeader(topic string, partition int32) *TopicPartitionError {
	return NewTopicPartitionError(topic, partition, ErrNoLeader)
}

func (e *TopicPartitionError) Error() string {
	return fmt.Sprintf("%v (topic=%q partition=%d)", e.Err, e.Topic, e.Partition)
}

func (e *TopicPartitionError) Unwrap() error {
	return e.Err
}
10	tap/extensions/kafka/go.mod	Normal file
@@ -0,0 +1,10 @@
module github.com/up9inc/mizu/tap/extensions/kafka

go 1.16

require (
	github.com/segmentio/kafka-go v0.4.17
	github.com/up9inc/mizu/tap/api v0.0.0
)

replace github.com/up9inc/mizu/tap/api v0.0.0 => ../../api
35	tap/extensions/kafka/go.sum	Normal file
@@ -0,0 +1,35 @@
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/eapache/go-xerial-snappy v0.0.0-20180814174437-776d5712da21 h1:YEetp8/yCZMuEPMUDHG0CW/brkkEp8mzqk2+ODEitlw=
github.com/eapache/go-xerial-snappy v0.0.0-20180814174437-776d5712da21/go.mod h1:+020luEh2TKB4/GOp8oxxtq0Daoen/Cii55CzbTV6DU=
github.com/frankban/quicktest v1.11.3 h1:8sXhOn0uLys67V8EsXLc6eszDs8VXWxL3iRvebPhedY=
github.com/frankban/quicktest v1.11.3/go.mod h1:wRf/ReqHper53s+kmmSZizM8NamnL3IM0I9ntUbOk+k=
github.com/golang/snappy v0.0.1 h1:Qgr9rKW7uDUkrbSmQeiDsGa8SjGyCOGtuasMWwvp2P4=
github.com/golang/snappy v0.0.1/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/google/go-cmp v0.5.4 h1:L8R9j+yAqZuZjsqh/z+F1NCffTKKLShY6zXTItVIZ8M=
github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/klauspost/compress v1.9.8 h1:VMAMUUOh+gaxKTMk+zqbjsSjsIcUcL/LF4o63i82QyA=
github.com/klauspost/compress v1.9.8/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0guNDohfE1A=
github.com/kr/pretty v0.2.1 h1:Fmg33tUaq4/8ym9TJN1x7sLJnHVwhP33CNkpYV/7rwI=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/pierrec/lz4 v2.6.0+incompatible h1:Ix9yFKn1nSPBLFl/yZknTp8TU5G4Ps0JDmguYK6iH1A=
github.com/pierrec/lz4 v2.6.0+incompatible/go.mod h1:pdkljMzZIN41W+lC3N2tnIh5sFi+IEE17M5jbnwPHcY=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/segmentio/kafka-go v0.4.17 h1:IyqRstL9KUTDb3kyGPOOa5VffokKWSEzN6geJ92dSDY=
github.com/segmentio/kafka-go v0.4.17/go.mod h1:19+Eg7KwrNKy/PFhiIthEPkO8k+ac7/ZYXwYM9Df10w=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/xdg/scram v0.0.0-20180814205039-7eeb5667e42c/go.mod h1:lB8K/P019DLNhemzwFU4jHLhdvlE6uDZjXFejJXr49I=
github.com/xdg/stringprep v1.0.0/go.mod h1:Jhud4/sHMO4oL310DaZAKk9ZaJ08SJfe+sJh0HrGL1Y=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190506204251-e1dfcc566284/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
650	tap/extensions/kafka/helpers.go	Normal file
@@ -0,0 +1,650 @@
package main

import (
	"encoding/json"
	"fmt"
	"strconv"

	"github.com/up9inc/mizu/tap/api"
)

type KafkaPayload struct {
	Data interface{}
}

type KafkaPayloader interface {
	MarshalJSON() ([]byte, error)
}

func (h KafkaPayload) MarshalJSON() ([]byte, error) {
	return json.Marshal(h.Data)
}

type KafkaWrapper struct {
	Method  string      `json:"method"`
	Url     string      `json:"url"`
	Details interface{} `json:"details"`
}

func representRequestHeader(data map[string]interface{}, rep []interface{}) []interface{} {
	requestHeader, _ := json.Marshal([]map[string]string{
		{
			"name":  "ApiKey",
			"value": apiNames[int(data["ApiKey"].(float64))],
		},
		{
			"name":  "ApiVersion",
			"value": fmt.Sprintf("%d", int(data["ApiVersion"].(float64))),
		},
		{
			"name":  "Client ID",
			"value": data["ClientID"].(string),
		},
		{
			"name":  "Correlation ID",
			"value": fmt.Sprintf("%d", int(data["CorrelationID"].(float64))),
		},
		{
			"name":  "Size",
			"value": fmt.Sprintf("%d", int(data["Size"].(float64))),
		},
	})
	rep = append(rep, map[string]string{
		"type":  api.TABLE,
		"title": "Request Header",
		"data":  string(requestHeader),
	})

	return rep
}

func representResponseHeader(data map[string]interface{}, rep []interface{}) []interface{} {
	responseHeader, _ := json.Marshal([]map[string]string{
		{
			"name":  "Correlation ID",
			"value": fmt.Sprintf("%d", int(data["CorrelationID"].(float64))),
		},
	})
	rep = append(rep, map[string]string{
		"type":  api.TABLE,
		"title": "Response Header",
		"data":  string(responseHeader),
	})

	return rep
}

func representMetadataRequest(data map[string]interface{}) []interface{} {
	rep := make([]interface{}, 0)

	rep = representRequestHeader(data, rep)

	payload := data["Payload"].(map[string]interface{})
	topics := ""
	allowAutoTopicCreation := ""
	includeClusterAuthorizedOperations := ""
	includeTopicAuthorizedOperations := ""
	if payload["Topics"] != nil {
		x, _ := json.Marshal(payload["Topics"].([]interface{}))
		topics = string(x)
	}
	if payload["AllowAutoTopicCreation"] != nil {
		allowAutoTopicCreation = strconv.FormatBool(payload["AllowAutoTopicCreation"].(bool))
	}
	if payload["IncludeClusterAuthorizedOperations"] != nil {
		includeClusterAuthorizedOperations = strconv.FormatBool(payload["IncludeClusterAuthorizedOperations"].(bool))
	}
	if payload["IncludeTopicAuthorizedOperations"] != nil {
		includeTopicAuthorizedOperations = strconv.FormatBool(payload["IncludeTopicAuthorizedOperations"].(bool))
	}
	repPayload, _ := json.Marshal([]map[string]string{
		{
			"name":  "Topics",
			"value": topics,
		},
		{
			"name":  "Allow Auto Topic Creation",
			"value": allowAutoTopicCreation,
		},
		{
			"name":  "Include Cluster Authorized Operations",
			"value": includeClusterAuthorizedOperations,
		},
		{
			"name":  "Include Topic Authorized Operations",
			"value": includeTopicAuthorizedOperations,
		},
	})
	rep = append(rep, map[string]string{
		"type":  api.TABLE,
		"title": "Payload",
		"data":  string(repPayload),
	})

	return rep
}

func representMetadataResponse(data map[string]interface{}) []interface{} {
	rep := make([]interface{}, 0)

	rep = representResponseHeader(data, rep)

	payload := data["Payload"].(map[string]interface{})
	topics, _ := json.Marshal(payload["Topics"].([]interface{}))
	brokers, _ := json.Marshal(payload["Brokers"].([]interface{}))
	controllerID := ""
	clusterID := ""
	throttleTimeMs := ""
	clusterAuthorizedOperations := ""
	if payload["ControllerID"] != nil {
		controllerID = fmt.Sprintf("%d", int(payload["ControllerID"].(float64)))
	}
	if payload["ClusterID"] != nil {
		clusterID = payload["ClusterID"].(string)
	}
	if payload["ThrottleTimeMs"] != nil {
		throttleTimeMs = fmt.Sprintf("%d", int(payload["ThrottleTimeMs"].(float64)))
	}
	if payload["ClusterAuthorizedOperations"] != nil {
		clusterAuthorizedOperations = fmt.Sprintf("%d", int(payload["ClusterAuthorizedOperations"].(float64)))
	}
	repPayload, _ := json.Marshal([]map[string]string{
		{
			"name":  "Throttle Time (ms)",
			"value": throttleTimeMs,
		},
		{
			"name":  "Brokers",
			"value": string(brokers),
		},
		{
			"name":  "Cluster ID",
			"value": clusterID,
		},
		{
			"name":  "Controller ID",
			"value": controllerID,
		},
		{
			"name":  "Topics",
			"value": string(topics),
		},
		{
			"name":  "Cluster Authorized Operations",
			"value": clusterAuthorizedOperations,
		},
	})
	rep = append(rep, map[string]string{
		"type":  api.TABLE,
		"title": "Payload",
		"data":  string(repPayload),
	})

	return rep
}

func representApiVersionsRequest(data map[string]interface{}) []interface{} {
	rep := make([]interface{}, 0)

	rep = representRequestHeader(data, rep)

	payload := data["Payload"].(map[string]interface{})
	clientSoftwareName := ""
	clientSoftwareVersion := ""
	if payload["ClientSoftwareName"] != nil {
		clientSoftwareName = payload["ClientSoftwareName"].(string)
	}
	if payload["ClientSoftwareVersion"] != nil {
		clientSoftwareVersion = payload["ClientSoftwareVersion"].(string)
	}
	repPayload, _ := json.Marshal([]map[string]string{
		{
			"name":  "Client Software Name",
			"value": clientSoftwareName,
		},
		{
			"name":  "Client Software Version",
			"value": clientSoftwareVersion,
		},
	})
	rep = append(rep, map[string]string{
		"type":  api.TABLE,
		"title": "Payload",
		"data":  string(repPayload),
	})

	return rep
}

func representApiVersionsResponse(data map[string]interface{}) []interface{} {
	rep := make([]interface{}, 0)

	rep = representResponseHeader(data, rep)

	payload := data["Payload"].(map[string]interface{})
	apiKeys := ""
	// Guard on the key that is actually read; the original checked
	// payload["TopicNames"], which never appears in an ApiVersions response.
	if payload["ApiKeys"] != nil {
		x, _ := json.Marshal(payload["ApiKeys"].([]interface{}))
		apiKeys = string(x)
	}
	throttleTimeMs := ""
	if payload["ThrottleTimeMs"] != nil {
		throttleTimeMs = fmt.Sprintf("%d", int(payload["ThrottleTimeMs"].(float64)))
	}
	repPayload, _ := json.Marshal([]map[string]string{
		{
			"name":  "Error Code",
			"value": fmt.Sprintf("%d", int(payload["ErrorCode"].(float64))),
		},
		{
			"name":  "ApiKeys",
			"value": apiKeys,
		},
		{
			"name":  "Throttle Time (ms)",
			"value": throttleTimeMs,
		},
	})
	rep = append(rep, map[string]string{
		"type":  api.TABLE,
		"title": "Payload",
		"data":  string(repPayload),
	})

	return rep
}

func representProduceRequest(data map[string]interface{}) []interface{} {
	rep := make([]interface{}, 0)

	rep = representRequestHeader(data, rep)

	payload := data["Payload"].(map[string]interface{})
	topicData := ""
	_topicData := payload["TopicData"]
	if _topicData != nil {
		x, _ := json.Marshal(_topicData.([]interface{}))
		topicData = string(x)
	}
	transactionalID := ""
	if payload["TransactionalID"] != nil {
		transactionalID = payload["TransactionalID"].(string)
	}
	repPayload, _ := json.Marshal([]map[string]string{
		{
			"name":  "Transactional ID",
			"value": transactionalID,
		},
		{
			"name":  "Required Acknowledgements",
			"value": fmt.Sprintf("%d", int(payload["RequiredAcks"].(float64))),
		},
		{
			"name":  "Timeout",
			"value": fmt.Sprintf("%d", int(payload["Timeout"].(float64))),
		},
		{
			"name":  "Topic Data",
			"value": topicData,
		},
	})
	rep = append(rep, map[string]string{
		"type":  api.TABLE,
		"title": "Payload",
		"data":  string(repPayload),
	})

	return rep
}

func representProduceResponse(data map[string]interface{}) []interface{} {
	rep := make([]interface{}, 0)

	rep = representResponseHeader(data, rep)

	payload := data["Payload"].(map[string]interface{})
	responses, _ := json.Marshal(payload["Responses"].([]interface{}))
	throttleTimeMs := ""
	if payload["ThrottleTimeMs"] != nil {
		throttleTimeMs = fmt.Sprintf("%d", int(payload["ThrottleTimeMs"].(float64)))
	}
	repPayload, _ := json.Marshal([]map[string]string{
		{
			"name":  "Responses",
			"value": string(responses),
		},
		{
			"name":  "Throttle Time (ms)",
			"value": throttleTimeMs,
		},
	})
	rep = append(rep, map[string]string{
		"type":  api.TABLE,
		"title": "Payload",
		"data":  string(repPayload),
	})

	return rep
}

func representFetchRequest(data map[string]interface{}) []interface{} {
	rep := make([]interface{}, 0)

	rep = representRequestHeader(data, rep)

	payload := data["Payload"].(map[string]interface{})
	topics, _ := json.Marshal(payload["Topics"].([]interface{}))
	replicaId := ""
	if payload["ReplicaId"] != nil {
		replicaId = fmt.Sprintf("%d", int(payload["ReplicaId"].(float64)))
	}
	maxBytes := ""
	if payload["MaxBytes"] != nil {
		maxBytes = fmt.Sprintf("%d", int(payload["MaxBytes"].(float64)))
	}
	isolationLevel := ""
	if payload["IsolationLevel"] != nil {
		isolationLevel = fmt.Sprintf("%d", int(payload["IsolationLevel"].(float64)))
	}
	sessionId := ""
	if payload["SessionId"] != nil {
		sessionId = fmt.Sprintf("%d", int(payload["SessionId"].(float64)))
	}
	sessionEpoch := ""
	if payload["SessionEpoch"] != nil {
		sessionEpoch = fmt.Sprintf("%d", int(payload["SessionEpoch"].(float64)))
	}
	forgottenTopicsData := ""
	if payload["ForgottenTopicsData"] != nil {
		x, _ := json.Marshal(payload["ForgottenTopicsData"].(map[string]interface{}))
		forgottenTopicsData = string(x)
	}
	rackId := ""
	if payload["RackId"] != nil {
		rackId = fmt.Sprintf("%d", int(payload["RackId"].(float64)))
	}
	repPayload, _ := json.Marshal([]map[string]string{
		{
			"name":  "Replica ID",
			"value": replicaId,
		},
		{
			"name":  "Maximum Wait (ms)",
			"value": fmt.Sprintf("%d", int(payload["MaxWaitMs"].(float64))),
		},
		{
			"name":  "Minimum Bytes",
			"value": fmt.Sprintf("%d", int(payload["MinBytes"].(float64))),
		},
		{
			"name":  "Maximum Bytes",
			"value": maxBytes,
		},
		{
			"name":  "Isolation Level",
			"value": isolationLevel,
		},
		{
			"name":  "Session ID",
			"value": sessionId,
		},
		{
			"name":  "Session Epoch",
			"value": sessionEpoch,
		},
		{
			"name":  "Topics",
			"value": string(topics),
		},
		{
			"name":  "Forgotten Topics Data",
			"value": forgottenTopicsData,
		},
		{
			"name":  "Rack ID",
			"value": rackId,
		},
	})
	rep = append(rep, map[string]string{
		"type":  api.TABLE,
		"title": "Payload",
		"data":  string(repPayload),
	})

	return rep
}

func representFetchResponse(data map[string]interface{}) []interface{} {
	rep := make([]interface{}, 0)

	rep = representResponseHeader(data, rep)

	payload := data["Payload"].(map[string]interface{})
	responses, _ := json.Marshal(payload["Responses"].([]interface{}))
	throttleTimeMs := ""
	if payload["ThrottleTimeMs"] != nil {
		throttleTimeMs = fmt.Sprintf("%d", int(payload["ThrottleTimeMs"].(float64)))
	}
	errorCode := ""
	if payload["ErrorCode"] != nil {
		errorCode = fmt.Sprintf("%d", int(payload["ErrorCode"].(float64)))
	}
	sessionId := ""
	if payload["SessionId"] != nil {
		sessionId = fmt.Sprintf("%d", int(payload["SessionId"].(float64)))
	}
	repPayload, _ := json.Marshal([]map[string]string{
		{
			"name":  "Throttle Time (ms)",
			"value": throttleTimeMs,
		},
		{
			"name":  "Error Code",
			"value": errorCode,
		},
		{
			"name":  "Session ID",
			"value": sessionId,
		},
		{
			"name":  "Responses",
			"value": string(responses),
		},
	})
	rep = append(rep, map[string]string{
		"type":  api.TABLE,
		"title": "Payload",
		"data":  string(repPayload),
	})

	return rep
}

func representListOffsetsRequest(data map[string]interface{}) []interface{} {
	rep := make([]interface{}, 0)

	rep = representRequestHeader(data, rep)

	payload := data["Payload"].(map[string]interface{})
	topics, _ := json.Marshal(payload["Topics"].([]interface{}))
	repPayload, _ := json.Marshal([]map[string]string{
		{
			"name":  "Replica ID",
			"value": fmt.Sprintf("%d", int(payload["ReplicaId"].(float64))),
		},
		{
			"name":  "Topics",
			"value": string(topics),
		},
	})
	rep = append(rep, map[string]string{
		"type":  api.TABLE,
		"title": "Payload",
		"data":  string(repPayload),
	})

	return rep
}

func representListOffsetsResponse(data map[string]interface{}) []interface{} {
	rep := make([]interface{}, 0)

	rep = representResponseHeader(data, rep)

	payload := data["Payload"].(map[string]interface{})
	topics, _ := json.Marshal(payload["Topics"].([]interface{}))
	throttleTimeMs := ""
	if payload["ThrottleTimeMs"] != nil {
		throttleTimeMs = fmt.Sprintf("%d", int(payload["ThrottleTimeMs"].(float64)))
	}
	repPayload, _ := json.Marshal([]map[string]string{
		{
			"name":  "Throttle Time (ms)",
			"value": throttleTimeMs,
		},
		{
			"name":  "Topics",
			"value": string(topics),
		},
	})
	rep = append(rep, map[string]string{
		"type":  api.TABLE,
		"title": "Payload",
		"data":  string(repPayload),
	})

	return rep
}

func representCreateTopicsRequest(data map[string]interface{}) []interface{} {
	rep := make([]interface{}, 0)

	rep = representRequestHeader(data, rep)

	payload := data["Payload"].(map[string]interface{})
	topics, _ := json.Marshal(payload["Topics"].([]interface{}))
	validateOnly := ""
	if payload["ValidateOnly"] != nil {
		validateOnly = strconv.FormatBool(payload["ValidateOnly"].(bool))
	}
	repPayload, _ := json.Marshal([]map[string]string{
		{
			"name":  "Topics",
			"value": string(topics),
		},
		{
			"name":  "Timeout (ms)",
			"value": fmt.Sprintf("%d", int(payload["TimeoutMs"].(float64))),
		},
		{
			"name":  "Validate Only",
			"value": validateOnly,
		},
	})
	rep = append(rep, map[string]string{
		"type":  api.TABLE,
		"title": "Payload",
		"data":  string(repPayload),
	})

	return rep
}

func representCreateTopicsResponse(data map[string]interface{}) []interface{} {
	rep := make([]interface{}, 0)

	rep = representResponseHeader(data, rep)

	payload := data["Payload"].(map[string]interface{})
	topics, _ := json.Marshal(payload["Topics"].([]interface{}))
	throttleTimeMs := ""
	if payload["ThrottleTimeMs"] != nil {
		throttleTimeMs = fmt.Sprintf("%d", int(payload["ThrottleTimeMs"].(float64)))
	}
	repPayload, _ := json.Marshal([]map[string]string{
		{
			"name":  "Throttle Time (ms)",
			"value": throttleTimeMs,
		},
		{
			"name":  "Topics",
			"value": string(topics),
		},
	})
	rep = append(rep, map[string]string{
		"type":  api.TABLE,
		"title": "Payload",
		"data":  string(repPayload),
	})

	return rep
}

func representDeleteTopicsRequest(data map[string]interface{}) []interface{} {
	rep := make([]interface{}, 0)

	rep = representRequestHeader(data, rep)

	payload := data["Payload"].(map[string]interface{})
	topics := ""
	if payload["Topics"] != nil {
		x, _ := json.Marshal(payload["Topics"].([]interface{}))
		topics = string(x)
	}
	topicNames := ""
	if payload["TopicNames"] != nil {
		x, _ := json.Marshal(payload["TopicNames"].([]interface{}))
		topicNames = string(x)
	}
	repPayload, _ := json.Marshal([]map[string]string{
		{
			"name":  "TopicNames",
			"value": topicNames,
		},
		{
			"name":  "Topics",
			"value": topics,
		},
		{
			"name":  "Timeout (ms)",
			"value": fmt.Sprintf("%d", int(payload["TimeoutMs"].(float64))),
		},
	})
	rep = append(rep, map[string]string{
		"type":  api.TABLE,
		"title": "Payload",
		"data":  string(repPayload),
	})

	return rep
}

func representDeleteTopicsResponse(data map[string]interface{}) []interface{} {
	rep := make([]interface{}, 0)

	rep = representResponseHeader(data, rep)

	payload := data["Payload"].(map[string]interface{})
	responses, _ := json.Marshal(payload["Responses"].([]interface{}))
	throttleTimeMs := ""
	if payload["ThrottleTimeMs"] != nil {
		throttleTimeMs = fmt.Sprintf("%d", int(payload["ThrottleTimeMs"].(float64)))
	}
	repPayload, _ := json.Marshal([]map[string]string{
		{
			"name":  "Throttle Time (ms)",
			"value": throttleTimeMs,
		},
		{
			"name":  "Responses",
			"value": string(responses),
		},
	})
	rep = append(rep, map[string]string{
		"type":  api.TABLE,
		"title": "Payload",
		"data":  string(repPayload),
	})

	return rep
}
249	tap/extensions/kafka/main.go	Normal file
@@ -0,0 +1,249 @@
package main

import (
	"bufio"
	"encoding/json"
	"errors"
	"fmt"
	"log"
	"time"

	"github.com/up9inc/mizu/tap/api"
)

var _protocol api.Protocol = api.Protocol{
	Name:            "kafka",
	LongName:        "Apache Kafka Protocol",
	Abbreviation:    "KAFKA",
	Version:         "12",
	BackgroundColor: "#000000",
	ForegroundColor: "#ffffff",
	FontSize:        11,
	ReferenceLink:   "https://kafka.apache.org/protocol",
	Ports:           []string{"9092"},
	Priority:        2,
}

func init() {
	log.Println("Initializing Kafka extension...")
}

type dissecting string

func (d dissecting) Register(extension *api.Extension) {
	extension.Protocol = &_protocol
	extension.MatcherMap = reqResMatcher.openMessagesMap
}

func (d dissecting) Ping() {
	log.Printf("pong %s\n", _protocol.Name)
}

func (d dissecting) Dissect(b *bufio.Reader, isClient bool, tcpID *api.TcpID, counterPair *api.CounterPair, superTimer *api.SuperTimer, superIdentifier *api.SuperIdentifier, emitter api.Emitter) error {
	for {
		if superIdentifier.Protocol != nil && superIdentifier.Protocol != &_protocol {
			return errors.New("identified by another protocol")
		}

		if isClient {
			_, _, err := ReadRequest(b, tcpID, superTimer)
			if err != nil {
				return err
			}
		} else {
			err := ReadResponse(b, tcpID, superTimer, emitter)
			if err != nil {
				return err
			}
		}
		superIdentifier.Protocol = &_protocol
	}
}

func (d dissecting) Analyze(item *api.OutputChannelItem, entryId string, resolvedSource string, resolvedDestination string) *api.MizuEntry {
	request := item.Pair.Request.Payload.(map[string]interface{})
	reqDetails := request["details"].(map[string]interface{})
	service := "kafka"
	if resolvedDestination != "" {
		service = resolvedDestination
	} else if resolvedSource != "" {
		service = resolvedSource
	}
	apiKey := ApiKey(reqDetails["ApiKey"].(float64))

	summary := ""
	switch apiKey {
	case Metadata:
		_topics := reqDetails["Payload"].(map[string]interface{})["Topics"]
		if _topics == nil {
			break
		}
		topics := _topics.([]interface{})
		for _, topic := range topics {
			summary += fmt.Sprintf("%s, ", topic.(map[string]interface{})["Name"].(string))
		}
		if len(summary) > 0 {
			summary = summary[:len(summary)-2]
		}
	case ApiVersions:
		summary = reqDetails["ClientID"].(string)
	case Produce:
		_topics := reqDetails["Payload"].(map[string]interface{})["TopicData"]
		if _topics == nil {
			break
		}
		topics := _topics.([]interface{})
		for _, topic := range topics {
			summary += fmt.Sprintf("%s, ", topic.(map[string]interface{})["Topic"].(string))
		}
		if len(summary) > 0 {
			summary = summary[:len(summary)-2]
		}
	case Fetch:
		topics := reqDetails["Payload"].(map[string]interface{})["Topics"].([]interface{})
		for _, topic := range topics {
			summary += fmt.Sprintf("%s, ", topic.(map[string]interface{})["Topic"].(string))
		}
		if len(summary) > 0 {
			summary = summary[:len(summary)-2]
		}
	case ListOffsets:
		topics := reqDetails["Payload"].(map[string]interface{})["Topics"].([]interface{})
		for _, topic := range topics {
			summary += fmt.Sprintf("%s, ", topic.(map[string]interface{})["Name"].(string))
		}
		if len(summary) > 0 {
			summary = summary[:len(summary)-2]
		}
	case CreateTopics:
		topics := reqDetails["Payload"].(map[string]interface{})["Topics"].([]interface{})
		for _, topic := range topics {
			summary += fmt.Sprintf("%s, ", topic.(map[string]interface{})["Name"].(string))
		}
		if len(summary) > 0 {
			summary = summary[:len(summary)-2]
		}
	case DeleteTopics:
		// JSON decoding yields []interface{}; asserting []string here would panic.
		topicNames := reqDetails["TopicNames"].([]interface{})
		for _, name := range topicNames {
			summary += fmt.Sprintf("%s, ", name.(string))
		}
		if len(summary) > 0 {
			summary = summary[:len(summary)-2]
		}
	}

	request["url"] = summary
	elapsedTime := item.Pair.Response.CaptureTime.Sub(item.Pair.Request.CaptureTime).Round(time.Millisecond).Milliseconds()
	entryBytes, _ := json.Marshal(item.Pair)
	return &api.MizuEntry{
		ProtocolName:            _protocol.Name,
		ProtocolLongName:        _protocol.LongName,
		ProtocolAbbreviation:    _protocol.Abbreviation,
		ProtocolVersion:         _protocol.Version,
		ProtocolBackgroundColor: _protocol.BackgroundColor,
		ProtocolForegroundColor: _protocol.ForegroundColor,
		ProtocolFontSize:        _protocol.FontSize,
		ProtocolReferenceLink:   _protocol.ReferenceLink,
		EntryId:                 entryId,
		Entry:                   string(entryBytes),
		Url:                     fmt.Sprintf("%s%s", service, summary),
		Method:                  apiNames[apiKey],
		Status:                  0,
		RequestSenderIp:         item.ConnectionInfo.ClientIP,
		Service:                 service,
		Timestamp:               item.Timestamp,
		ElapsedTime:             elapsedTime,
		Path:                    summary,
		ResolvedSource:          resolvedSource,
		ResolvedDestination:     resolvedDestination,
		SourceIp:                item.ConnectionInfo.ClientIP,
		DestinationIp:           item.ConnectionInfo.ServerIP,
		SourcePort:              item.ConnectionInfo.ClientPort,
		DestinationPort:         item.ConnectionInfo.ServerPort,
		IsOutgoing:              item.ConnectionInfo.IsOutgoing,
	}
}

func (d dissecting) Summarize(entry *api.MizuEntry) *api.BaseEntryDetails {
	return &api.BaseEntryDetails{
		Id:              entry.EntryId,
		Protocol:        _protocol,
		Url:             entry.Url,
		RequestSenderIp: entry.RequestSenderIp,
		Service:         entry.Service,
		Summary:         entry.Path,
		StatusCode:      entry.Status,
		Method:          entry.Method,
		Timestamp:       entry.Timestamp,
		SourceIp:        entry.SourceIp,
		DestinationIp:   entry.DestinationIp,
		SourcePort:      entry.SourcePort,
		DestinationPort: entry.DestinationPort,
		IsOutgoing:      entry.IsOutgoing,
		Latency:         0,
		Rules: api.ApplicableRules{
			Latency: 0,
			Status:  false,
		},
	}
}

func (d dissecting) Represent(entry *api.MizuEntry) (p api.Protocol, object []byte, bodySize int64, err error) {
	p = _protocol
	bodySize = 0
	var root map[string]interface{}
	json.Unmarshal([]byte(entry.Entry), &root)
	representation := make(map[string]interface{}, 0)
	request := root["request"].(map[string]interface{})["payload"].(map[string]interface{})
	response := root["response"].(map[string]interface{})["payload"].(map[string]interface{})
	reqDetails := request["details"].(map[string]interface{})
	resDetails := response["details"].(map[string]interface{})

	apiKey := ApiKey(reqDetails["ApiKey"].(float64))

	var repRequest []interface{}
	var repResponse []interface{}
	switch apiKey {
	case Metadata:
		repRequest = representMetadataRequest(reqDetails)
		repResponse = representMetadataResponse(resDetails)
	case ApiVersions:
		repRequest = representApiVersionsRequest(reqDetails)
		repResponse = representApiVersionsResponse(resDetails)
	case Produce:
		repRequest = representProduceRequest(reqDetails)
		repResponse = representProduceResponse(resDetails)
	case Fetch:
		repRequest = representFetchRequest(reqDetails)
		repResponse = representFetchResponse(resDetails)
	case ListOffsets:
		repRequest = representListOffsetsRequest(reqDetails)
		repResponse = representListOffsetsResponse(resDetails)
	case CreateTopics:
		repRequest = representCreateTopicsRequest(reqDetails)
		repResponse = representCreateTopicsResponse(resDetails)
	case DeleteTopics:
		repRequest = representDeleteTopicsRequest(reqDetails)
		repResponse = representDeleteTopicsResponse(resDetails)
	}

	representation["request"] = repRequest
	representation["response"] = repResponse
	object, err = json.Marshal(representation)
	return
}

var Dissector dissecting
58
tap/extensions/kafka/matcher.go
Normal file
@@ -0,0 +1,58 @@
package main

import (
	"sync"
	"time"
)

var reqResMatcher = CreateResponseRequestMatcher() // global

const maxTry int = 3000

type RequestResponsePair struct {
	Request  Request
	Response Response
}

// Key is {client_addr}:{client_port}->{dest_addr}:{dest_port}::{correlation_id}
type requestResponseMatcher struct {
	openMessagesMap *sync.Map
}

func CreateResponseRequestMatcher() requestResponseMatcher {
	newMatcher := &requestResponseMatcher{openMessagesMap: &sync.Map{}}
	return *newMatcher
}

func (matcher *requestResponseMatcher) registerRequest(key string, request *Request) *RequestResponsePair {
	if response, found := matcher.openMessagesMap.LoadAndDelete(key); found {
		// Check for a situation that only occurs when a Kafka broker is initiating
		switch response := response.(type) {
		case *Response:
			return matcher.preparePair(request, response)
		}
	}

	matcher.openMessagesMap.Store(key, request)
	return nil
}

func (matcher *requestResponseMatcher) registerResponse(key string, response *Response) *RequestResponsePair {
	try := 0
	for {
		try++
		if try > maxTry {
			return nil
		}
		if request, found := matcher.openMessagesMap.LoadAndDelete(key); found {
			return matcher.preparePair(request.(*Request), response)
		}
		time.Sleep(1 * time.Millisecond)
	}
}

func (matcher *requestResponseMatcher) preparePair(request *Request, response *Response) *RequestResponsePair {
	return &RequestResponsePair{
		Request:  *request,
		Response: *response,
	}
}
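The matcher above parks whichever side of a request/response pair arrives first in a `sync.Map` under a shared key, and the second arrival claims it via `LoadAndDelete`. A minimal standalone sketch of that scheme (the `parkOrMatch` helper and `pair` type here are illustrative simplifications, not part of matcher.go):

```go
package main

import (
	"fmt"
	"sync"
)

// pair is a simplified stand-in for RequestResponsePair.
type pair struct{ req, res string }

var open sync.Map

// parkOrMatch stores the first arrival under key; the second arrival
// with the same key removes it and completes the pair.
func parkOrMatch(key, msg string, isRequest bool) *pair {
	if other, found := open.LoadAndDelete(key); found {
		if isRequest {
			return &pair{req: msg, res: other.(string)}
		}
		return &pair{req: other.(string), res: msg}
	}
	open.Store(key, msg)
	return nil
}

func main() {
	// Key shape mirrors the comment above: {src}->{dst}::{correlation_id}
	key := "10.0.0.1:5000->10.0.0.2:9092::42"
	fmt.Println(parkOrMatch(key, "FetchRequest", true))    // <nil>: parked
	fmt.Println(*parkOrMatch(key, "FetchResponse", false)) // {FetchRequest FetchResponse}
}
```

The real `registerResponse` additionally polls for up to `maxTry` milliseconds, since the response dissector may run ahead of the request dissector on a live capture.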
480
tap/extensions/kafka/protocol.go
Normal file
@@ -0,0 +1,480 @@
package main

import (
	"fmt"
	"io"
	"net"
	"reflect"
	"strconv"
	"strings"
)

// Message is an interface implemented by all request and response types of the
// kafka protocol.
//
// This interface is used mostly as a safe-guard to provide a compile-time check
// for values passed to functions dealing with kafka message types.
type Message interface {
	ApiKey() ApiKey
}

type ApiKey int16

func (k ApiKey) String() string {
	if i := int(k); i >= 0 && i < len(apiNames) {
		return apiNames[i]
	}
	return strconv.Itoa(int(k))
}

func (k ApiKey) MinVersion() int16 { return k.apiType().minVersion() }

func (k ApiKey) MaxVersion() int16 { return k.apiType().maxVersion() }

func (k ApiKey) SelectVersion(minVersion, maxVersion int16) int16 {
	min := k.MinVersion()
	max := k.MaxVersion()
	switch {
	case min > maxVersion:
		return min
	case max < maxVersion:
		return max
	default:
		return maxVersion
	}
}
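`SelectVersion` clamps the version advertised by the peer into the range this implementation supports: never below our minimum, never above our maximum. A standalone sketch of the same clamping rule (the free function `selectVersion` is illustrative; the method above reads its bounds from the `apiTypes` table):

```go
package main

import "fmt"

// selectVersion clamps the peer's advertised maximum version into
// [ourMin, ourMax], mirroring ApiKey.SelectVersion.
func selectVersion(ourMin, ourMax, peerMax int16) int16 {
	switch {
	case ourMin > peerMax:
		return ourMin // peer too old: fall back to our minimum
	case ourMax < peerMax:
		return ourMax // peer too new: cap at our maximum
	default:
		return peerMax // peer's maximum is within our range
	}
}

func main() {
	fmt.Println(selectVersion(0, 9, 11)) // 9
	fmt.Println(selectVersion(3, 9, 1))  // 3
	fmt.Println(selectVersion(0, 9, 7))  // 7
}
```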
func (k ApiKey) apiType() apiType {
	if i := int(k); i >= 0 && i < len(apiTypes) {
		return apiTypes[i]
	}
	return apiType{}
}

const (
	Produce                     ApiKey = 0
	Fetch                       ApiKey = 1
	ListOffsets                 ApiKey = 2
	Metadata                    ApiKey = 3
	LeaderAndIsr                ApiKey = 4
	StopReplica                 ApiKey = 5
	UpdateMetadata              ApiKey = 6
	ControlledShutdown          ApiKey = 7
	OffsetCommit                ApiKey = 8
	OffsetFetch                 ApiKey = 9
	FindCoordinator             ApiKey = 10
	JoinGroup                   ApiKey = 11
	Heartbeat                   ApiKey = 12
	LeaveGroup                  ApiKey = 13
	SyncGroup                   ApiKey = 14
	DescribeGroups              ApiKey = 15
	ListGroups                  ApiKey = 16
	SaslHandshake               ApiKey = 17
	ApiVersions                 ApiKey = 18
	CreateTopics                ApiKey = 19
	DeleteTopics                ApiKey = 20
	DeleteRecords               ApiKey = 21
	InitProducerId              ApiKey = 22
	OffsetForLeaderEpoch        ApiKey = 23
	AddPartitionsToTxn          ApiKey = 24
	AddOffsetsToTxn             ApiKey = 25
	EndTxn                      ApiKey = 26
	WriteTxnMarkers             ApiKey = 27
	TxnOffsetCommit             ApiKey = 28
	DescribeAcls                ApiKey = 29
	CreateAcls                  ApiKey = 30
	DeleteAcls                  ApiKey = 31
	DescribeConfigs             ApiKey = 32
	AlterConfigs                ApiKey = 33
	AlterReplicaLogDirs         ApiKey = 34
	DescribeLogDirs             ApiKey = 35
	SaslAuthenticate            ApiKey = 36
	CreatePartitions            ApiKey = 37
	CreateDelegationToken       ApiKey = 38
	RenewDelegationToken        ApiKey = 39
	ExpireDelegationToken       ApiKey = 40
	DescribeDelegationToken     ApiKey = 41
	DeleteGroups                ApiKey = 42
	ElectLeaders                ApiKey = 43
	IncrementalAlterConfigs     ApiKey = 44
	AlterPartitionReassignments ApiKey = 45
	ListPartitionReassignments  ApiKey = 46
	OffsetDelete                ApiKey = 47
	DescribeClientQuotas        ApiKey = 48
	AlterClientQuotas           ApiKey = 49

	numApis = 50
)

var apiNames = [numApis]string{
	Produce:                     "Produce",
	Fetch:                       "Fetch",
	ListOffsets:                 "ListOffsets",
	Metadata:                    "Metadata",
	LeaderAndIsr:                "LeaderAndIsr",
	StopReplica:                 "StopReplica",
	UpdateMetadata:              "UpdateMetadata",
	ControlledShutdown:          "ControlledShutdown",
	OffsetCommit:                "OffsetCommit",
	OffsetFetch:                 "OffsetFetch",
	FindCoordinator:             "FindCoordinator",
	JoinGroup:                   "JoinGroup",
	Heartbeat:                   "Heartbeat",
	LeaveGroup:                  "LeaveGroup",
	SyncGroup:                   "SyncGroup",
	DescribeGroups:              "DescribeGroups",
	ListGroups:                  "ListGroups",
	SaslHandshake:               "SaslHandshake",
	ApiVersions:                 "ApiVersions",
	CreateTopics:                "CreateTopics",
	DeleteTopics:                "DeleteTopics",
	DeleteRecords:               "DeleteRecords",
	InitProducerId:              "InitProducerId",
	OffsetForLeaderEpoch:        "OffsetForLeaderEpoch",
	AddPartitionsToTxn:          "AddPartitionsToTxn",
	AddOffsetsToTxn:             "AddOffsetsToTxn",
	EndTxn:                      "EndTxn",
	WriteTxnMarkers:             "WriteTxnMarkers",
	TxnOffsetCommit:             "TxnOffsetCommit",
	DescribeAcls:                "DescribeAcls",
	CreateAcls:                  "CreateAcls",
	DeleteAcls:                  "DeleteAcls",
	DescribeConfigs:             "DescribeConfigs",
	AlterConfigs:                "AlterConfigs",
	AlterReplicaLogDirs:         "AlterReplicaLogDirs",
	DescribeLogDirs:             "DescribeLogDirs",
	SaslAuthenticate:            "SaslAuthenticate",
	CreatePartitions:            "CreatePartitions",
	CreateDelegationToken:       "CreateDelegationToken",
	RenewDelegationToken:        "RenewDelegationToken",
	ExpireDelegationToken:       "ExpireDelegationToken",
	DescribeDelegationToken:     "DescribeDelegationToken",
	DeleteGroups:                "DeleteGroups",
	ElectLeaders:                "ElectLeaders",
	IncrementalAlterConfigs:     "IncrementalAlterConfigs",
	AlterPartitionReassignments: "AlterPartitionReassignments",
	ListPartitionReassignments:  "ListPartitionReassignments",
	OffsetDelete:                "OffsetDelete",
	DescribeClientQuotas:        "DescribeClientQuotas",
	AlterClientQuotas:           "AlterClientQuotas",
}

type messageType struct {
	version  int16
	flexible bool
	gotype   reflect.Type
	decode   decodeFunc
	encode   encodeFunc
}

func (t *messageType) new() Message {
	return reflect.New(t.gotype).Interface().(Message)
}

type apiType struct {
	requests  []messageType
	responses []messageType
}

func (t apiType) minVersion() int16 {
	if len(t.requests) == 0 {
		return 0
	}
	return t.requests[0].version
}

func (t apiType) maxVersion() int16 {
	if len(t.requests) == 0 {
		return 0
	}
	return t.requests[len(t.requests)-1].version
}

var apiTypes [numApis]apiType

// Register is automatically called when sub-packages are imported to install a
// new pair of request/response message types.
func Register(req, res Message) {
	k1 := req.ApiKey()
	k2 := res.ApiKey()

	if k1 != k2 {
		panic(fmt.Sprintf("[%T/%T]: request and response API keys mismatch: %d != %d", req, res, k1, k2))
	}

	apiTypes[k1] = apiType{
		requests:  typesOf(req),
		responses: typesOf(res),
	}
}

func typesOf(v interface{}) []messageType {
	return makeTypes(reflect.TypeOf(v).Elem())
}

func makeTypes(t reflect.Type) []messageType {
	minVersion := int16(-1)
	maxVersion := int16(-1)

	// All future versions will be flexible (according to spec), so we don't
	// need to worry about maximums here.
	minFlexibleVersion := int16(-1)

	forEachStructField(t, func(_ reflect.Type, _ index, tag string) {
		forEachStructTag(tag, func(tag structTag) bool {
			if minVersion < 0 || tag.MinVersion < minVersion {
				minVersion = tag.MinVersion
			}
			if maxVersion < 0 || tag.MaxVersion > maxVersion {
				maxVersion = tag.MaxVersion
			}
			if tag.TagID > -2 && (minFlexibleVersion < 0 || tag.MinVersion < minFlexibleVersion) {
				minFlexibleVersion = tag.MinVersion
			}
			return true
		})
	})

	types := make([]messageType, 0, (maxVersion-minVersion)+1)

	for v := minVersion; v <= maxVersion; v++ {
		flexible := minFlexibleVersion >= 0 && v >= minFlexibleVersion

		types = append(types, messageType{
			version:  v,
			gotype:   t,
			flexible: flexible,
			decode:   decodeFuncOf(t, v, flexible, structTag{}),
			encode:   encodeFuncOf(t, v, flexible, structTag{}),
		})
	}

	return types
}

type structTag struct {
	MinVersion int16
	MaxVersion int16
	Compact    bool
	Nullable   bool
	TagID      int
}

func forEachStructTag(tag string, do func(structTag) bool) {
	if tag == "-" {
		return // special case to ignore the field
	}

	forEach(tag, '|', func(s string) bool {
		tag := structTag{
			MinVersion: -1,
			MaxVersion: -1,

			// Legitimate tag IDs can start at 0. We use -1 as a placeholder to indicate
			// that the message type is flexible, so that leaves -2 as the default for
			// indicating that there is no tag ID and the message is not flexible.
			TagID: -2,
		}

		var err error
		forEach(s, ',', func(s string) bool {
			switch {
			case strings.HasPrefix(s, "min="):
				tag.MinVersion, err = parseVersion(s[4:])
			case strings.HasPrefix(s, "max="):
				tag.MaxVersion, err = parseVersion(s[4:])
			case s == "tag":
				tag.TagID = -1
			case strings.HasPrefix(s, "tag="):
				tag.TagID, err = strconv.Atoi(s[4:])
			case s == "compact":
				tag.Compact = true
			case s == "nullable":
				tag.Nullable = true
			default:
				err = fmt.Errorf("unrecognized option: %q", s)
			}
			return err == nil
		})

		if err != nil {
			panic(fmt.Errorf("malformed struct tag: %w", err))
		}

		if tag.MinVersion < 0 && tag.MaxVersion >= 0 {
			panic(fmt.Errorf("missing minimum version in struct tag: %q", s))
		}

		if tag.MaxVersion < 0 && tag.MinVersion >= 0 {
			panic(fmt.Errorf("missing maximum version in struct tag: %q", s))
		}

		if tag.MinVersion > tag.MaxVersion {
			panic(fmt.Errorf("invalid version range in struct tag: %q", s))
		}

		return do(tag)
	})
}
func forEach(s string, sep byte, do func(string) bool) bool {
	for len(s) != 0 {
		p := ""
		i := strings.IndexByte(s, sep)
		if i < 0 {
			p, s = s, ""
		} else {
			p, s = s[:i], s[i+1:]
		}
		if !do(p) {
			return false
		}
	}
	return true
}

func forEachStructField(t reflect.Type, do func(reflect.Type, index, string)) {
	for i, n := 0, t.NumField(); i < n; i++ {
		f := t.Field(i)

		if f.PkgPath != "" && f.Name != "_" {
			continue
		}

		kafkaTag, ok := f.Tag.Lookup("kafka")
		if !ok {
			kafkaTag = "|"
		}

		do(f.Type, indexOf(f), kafkaTag)
	}
}

func parseVersion(s string) (int16, error) {
	if !strings.HasPrefix(s, "v") {
		return 0, fmt.Errorf("invalid version number: %q", s)
	}
	i, err := strconv.ParseInt(s[1:], 10, 16)
	if err != nil {
		return 0, fmt.Errorf("invalid version number: %q: %w", s, err)
	}
	if i < 0 {
		return 0, fmt.Errorf("invalid negative version number: %q", s)
	}
	return int16(i), nil
}

func dontExpectEOF(err error) error {
	switch err {
	case nil:
		return nil
	case io.EOF:
		return io.ErrUnexpectedEOF
	default:
		return err
	}
}

type Broker struct {
	ID   int32
	Host string
	Port int32
	Rack string
}

func (b Broker) String() string {
	return net.JoinHostPort(b.Host, itoa(b.Port))
}

func (b Broker) Format(w fmt.State, v rune) {
	switch v {
	case 'd':
		io.WriteString(w, itoa(b.ID))
	case 's':
		io.WriteString(w, b.String())
	case 'v':
		io.WriteString(w, itoa(b.ID))
		io.WriteString(w, " ")
		io.WriteString(w, b.String())
		if b.Rack != "" {
			io.WriteString(w, " ")
			io.WriteString(w, b.Rack)
		}
	}
}
func itoa(i int32) string {
	return strconv.Itoa(int(i))
}

type Topic struct {
	Name       string
	Error      int16
	Partitions map[int32]Partition
}

type Partition struct {
	ID       int32
	Error    int16
	Leader   int32
	Replicas []int32
	ISR      []int32
	Offline  []int32
}

// BrokerMessage is an extension of the Message interface implemented by some
// request types to customize the broker assignment logic.
type BrokerMessage interface {
	// Given a representation of the kafka cluster state as argument, returns
	// the broker that the message should be routed to.
	Broker(Cluster) (Broker, error)
}

// GroupMessage is an extension of the Message interface implemented by some
// request types to inform the program that they should be routed to a group
// coordinator.
type GroupMessage interface {
	// Returns the group configured on the message.
	Group() string
}

// PreparedMessage is an extension of the Message interface implemented by some
// request types which may need to run some pre-processing on their state before
// being sent.
type PreparedMessage interface {
	// Prepares the message before being sent to a kafka broker using the API
	// version passed as argument.
	Prepare(apiVersion int16)
}

// Splitter is an interface implemented by messages that can be split into
// multiple requests and have their results merged back by a Merger.
type Splitter interface {
	// For a given cluster layout, returns the list of messages constructed
	// from the receiver for each request that should be sent to the cluster.
	// The second return value is a Merger which can be used to merge back the
	// results of each request into a single message (or an error).
	Split(Cluster) ([]Message, Merger, error)
}

// Merger is an interface implemented by messages which can merge multiple
// results into one response.
type Merger interface {
	// Given a list of messages and associated results, merge them back into a
	// response (or an error). The results must be either Message or error
	// values; other types should trigger a panic.
	Merge(messages []Message, results []interface{}) (Message, error)
}

// Result converts r to a Message or an error, or panics if r cannot be
// converted to either of these types.
func Result(r interface{}) (Message, error) {
	switch v := r.(type) {
	case Message:
		return v, nil
	case error:
		return nil, v
	default:
		panic(fmt.Errorf("BUG: result must be a message or an error but not %T", v))
	}
}
219
tap/extensions/kafka/protocol_make.go
Normal file
@@ -0,0 +1,219 @@
package main

import (
	"encoding/binary"
	"fmt"
	"strconv"
)

type ApiVersion struct {
	ApiKey     int16
	MinVersion int16
	MaxVersion int16
}

func (v ApiVersion) Format(w fmt.State, r rune) {
	switch r {
	case 's':
		fmt.Fprint(w, apiKey(v.ApiKey))
	case 'd':
		switch {
		case w.Flag('-'):
			fmt.Fprint(w, v.MinVersion)
		case w.Flag('+'):
			fmt.Fprint(w, v.MaxVersion)
		default:
			fmt.Fprint(w, v.ApiKey)
		}
	case 'v':
		switch {
		case w.Flag('-'):
			fmt.Fprintf(w, "v%d", v.MinVersion)
		case w.Flag('+'):
			fmt.Fprintf(w, "v%d", v.MaxVersion)
		case w.Flag('#'):
			fmt.Fprintf(w, "kafka.ApiVersion{ApiKey:%d MinVersion:%d MaxVersion:%d}", v.ApiKey, v.MinVersion, v.MaxVersion)
		default:
			fmt.Fprintf(w, "%s[v%d:v%d]", apiKey(v.ApiKey), v.MinVersion, v.MaxVersion)
		}
	}
}

type apiKey int16

const (
	produce                     apiKey = 0
	fetch                       apiKey = 1
	listOffsets                 apiKey = 2
	metadata                    apiKey = 3
	leaderAndIsr                apiKey = 4
	stopReplica                 apiKey = 5
	updateMetadata              apiKey = 6
	controlledShutdown          apiKey = 7
	offsetCommit                apiKey = 8
	offsetFetch                 apiKey = 9
	findCoordinator             apiKey = 10
	joinGroup                   apiKey = 11
	heartbeat                   apiKey = 12
	leaveGroup                  apiKey = 13
	syncGroup                   apiKey = 14
	describeGroups              apiKey = 15
	listGroups                  apiKey = 16
	saslHandshake               apiKey = 17
	apiVersions                 apiKey = 18
	createTopics                apiKey = 19
	deleteTopics                apiKey = 20
	deleteRecords               apiKey = 21
	initProducerId              apiKey = 22
	offsetForLeaderEpoch        apiKey = 23
	addPartitionsToTxn          apiKey = 24
	addOffsetsToTxn             apiKey = 25
	endTxn                      apiKey = 26
	writeTxnMarkers             apiKey = 27
	txnOffsetCommit             apiKey = 28
	describeAcls                apiKey = 29
	createAcls                  apiKey = 30
	deleteAcls                  apiKey = 31
	describeConfigs             apiKey = 32
	alterConfigs                apiKey = 33
	alterReplicaLogDirs         apiKey = 34
	describeLogDirs             apiKey = 35
	saslAuthenticate            apiKey = 36
	createPartitions            apiKey = 37
	createDelegationToken       apiKey = 38
	renewDelegationToken        apiKey = 39
	expireDelegationToken       apiKey = 40
	describeDelegationToken     apiKey = 41
	deleteGroups                apiKey = 42
	electLeaders                apiKey = 43
	incrementalAlterConfigs     apiKey = 44
	alterPartitionReassignments apiKey = 45
	listPartitionReassignments  apiKey = 46
	offsetDelete                apiKey = 47
)

func (k apiKey) String() string {
	if i := int(k); i >= 0 && i < len(apiKeyStrings) {
		return apiKeyStrings[i]
	}
	return strconv.Itoa(int(k))
}

type apiVersion int16

const (
	v0  = 0
	v1  = 1
	v2  = 2
	v3  = 3
	v4  = 4
	v5  = 5
	v6  = 6
	v7  = 7
	v8  = 8
	v9  = 9
	v10 = 10
)

var apiKeyStrings = [...]string{
	produce:                     "Produce",
	fetch:                       "Fetch",
	listOffsets:                 "ListOffsets",
	metadata:                    "Metadata",
	leaderAndIsr:                "LeaderAndIsr",
	stopReplica:                 "StopReplica",
	updateMetadata:              "UpdateMetadata",
	controlledShutdown:          "ControlledShutdown",
	offsetCommit:                "OffsetCommit",
	offsetFetch:                 "OffsetFetch",
	findCoordinator:             "FindCoordinator",
	joinGroup:                   "JoinGroup",
	heartbeat:                   "Heartbeat",
	leaveGroup:                  "LeaveGroup",
	syncGroup:                   "SyncGroup",
	describeGroups:              "DescribeGroups",
	listGroups:                  "ListGroups",
	saslHandshake:               "SaslHandshake",
	apiVersions:                 "ApiVersions",
	createTopics:                "CreateTopics",
	deleteTopics:                "DeleteTopics",
	deleteRecords:               "DeleteRecords",
	initProducerId:              "InitProducerId",
	offsetForLeaderEpoch:        "OffsetForLeaderEpoch",
	addPartitionsToTxn:          "AddPartitionsToTxn",
	addOffsetsToTxn:             "AddOffsetsToTxn",
	endTxn:                      "EndTxn",
	writeTxnMarkers:             "WriteTxnMarkers",
	txnOffsetCommit:             "TxnOffsetCommit",
	describeAcls:                "DescribeAcls",
	createAcls:                  "CreateAcls",
	deleteAcls:                  "DeleteAcls",
	describeConfigs:             "DescribeConfigs",
	alterConfigs:                "AlterConfigs",
	alterReplicaLogDirs:         "AlterReplicaLogDirs",
	describeLogDirs:             "DescribeLogDirs",
	saslAuthenticate:            "SaslAuthenticate",
	createPartitions:            "CreatePartitions",
	createDelegationToken:       "CreateDelegationToken",
	renewDelegationToken:        "RenewDelegationToken",
	expireDelegationToken:       "ExpireDelegationToken",
	describeDelegationToken:     "DescribeDelegationToken",
	deleteGroups:                "DeleteGroups",
	electLeaders:                "ElectLeaders",
	incrementalAlterConfigs:     "IncrementalAlterConfigs",
	alterPartitionReassignments: "AlterPartitionReassignments",
	listPartitionReassignments:  "ListPartitionReassignments",
	offsetDelete:                "OffsetDelete",
}

type requestHeader struct {
	Size          int32
	ApiKey        int16
	ApiVersion    int16
	CorrelationID int32
	ClientID      string
}

func sizeofString(s string) int32 {
	return 2 + int32(len(s))
}

func (h requestHeader) size() int32 {
	return 4 + 2 + 2 + 4 + sizeofString(h.ClientID)
}
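The wire-size arithmetic in `requestHeader.size()` adds the fixed fields (`Size` int32 + `ApiKey` int16 + `ApiVersion` int16 + `CorrelationID` int32) to a length-prefixed string (an int16 length plus the bytes). A standalone sketch of the same computation (the `headerSize` helper is illustrative):

```go
package main

import "fmt"

// headerSize mirrors requestHeader.size(): 4 (Size) + 2 (ApiKey) +
// 2 (ApiVersion) + 4 (CorrelationID) + 2 (string length prefix) + bytes.
func headerSize(clientID string) int32 {
	return 4 + 2 + 2 + 4 + 2 + int32(len(clientID))
}

func main() {
	fmt.Println(headerSize(""))     // 14
	fmt.Println(headerSize("mizu")) // 18
}
```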
// func (h requestHeader) writeTo(wb *writeBuffer) {
// 	wb.writeInt32(h.Size)
// 	wb.writeInt16(h.ApiKey)
// 	wb.writeInt16(h.ApiVersion)
// 	wb.writeInt32(h.CorrelationID)
// 	wb.writeString(h.ClientID)
// }

type request interface {
	size() int32
	// writable
}

func makeInt8(b []byte) int8 {
	return int8(b[0])
}

func makeInt16(b []byte) int16 {
	return int16(binary.BigEndian.Uint16(b))
}

func makeInt32(b []byte) int32 {
	return int32(binary.BigEndian.Uint32(b))
}

func makeInt64(b []byte) int64 {
	return int64(binary.BigEndian.Uint64(b))
}

func expectZeroSize(sz int, err error) error {
	if err == nil && sz != 0 {
		err = fmt.Errorf("reading a response left %d unread bytes", sz)
	}
	return err
}
639
tap/extensions/kafka/read.go
Normal file
@@ -0,0 +1,639 @@
package main

import (
	"bufio"
	"errors"
	"fmt"
	"io"
	"reflect"
)

type readable interface {
	readFrom(*bufio.Reader, int) (int, error)
}

var errShortRead = errors.New("not enough bytes available to load the response")

func peekRead(r *bufio.Reader, sz int, n int, f func([]byte)) (int, error) {
	if n > sz {
		return sz, errShortRead
	}
	b, err := r.Peek(n)
	if err != nil {
		return sz, err
	}
	f(b)
	return discardN(r, sz, n)
}

func readInt8(r *bufio.Reader, sz int, v *int8) (int, error) {
	return peekRead(r, sz, 1, func(b []byte) { *v = makeInt8(b) })
}

func readInt16(r *bufio.Reader, sz int, v *int16) (int, error) {
	return peekRead(r, sz, 2, func(b []byte) { *v = makeInt16(b) })
}

func readInt32(r *bufio.Reader, sz int, v *int32) (int, error) {
	return peekRead(r, sz, 4, func(b []byte) { *v = makeInt32(b) })
}

func readInt64(r *bufio.Reader, sz int, v *int64) (int, error) {
	return peekRead(r, sz, 8, func(b []byte) { *v = makeInt64(b) })
}

func readVarInt(r *bufio.Reader, sz int, v *int64) (remain int, err error) {
	// Optimistically assume that most of the time, there will be data buffered
	// in the reader. If this is not the case, the buffer will be refilled after
	// consuming zero bytes from the input.
	input, _ := r.Peek(r.Buffered())
	x := uint64(0)
	s := uint(0)

	for {
		if len(input) > sz {
			input = input[:sz]
		}

		for i, b := range input {
			if b < 0x80 {
				x |= uint64(b) << s
				*v = int64(x>>1) ^ -(int64(x) & 1)
				n, err := r.Discard(i + 1)
				return sz - n, err
			}

			x |= uint64(b&0x7f) << s
			s += 7
		}

		// Make room in the input buffer to load more data from the underlying
		// stream. The x and s variables are left untouched, ensuring that the
		// varint decoding can continue on the next loop iteration.
		n, _ := r.Discard(len(input))
		sz -= n
		if sz == 0 {
			return 0, errShortRead
		}

		// Fill the buffer: ask for one more byte, but in practice the reader
		// will load way more from the underlying stream.
		if _, err := r.Peek(1); err != nil {
			if err == io.EOF {
				err = errShortRead
			}
			return sz, err
		}

		// Grab as many bytes as possible from the buffer, then go on to the
		// next loop iteration which is going to consume it.
		input, _ = r.Peek(r.Buffered())
	}
}
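`readVarInt` accumulates 7-bit groups least-significant first, then zigzag-decodes the result (`int64(x>>1) ^ -(int64(x)&1)`) so that small negative numbers stay short on the wire. A standalone sketch of the decoding core, without the buffered-reader refill logic (the `decodeVarInt` helper is illustrative):

```go
package main

import "fmt"

// decodeVarInt decodes a zigzag varint from b, returning the value and
// the number of bytes consumed (0 if the input is incomplete).
func decodeVarInt(b []byte) (int64, int) {
	x := uint64(0)
	s := uint(0)
	for i, c := range b {
		if c < 0x80 { // high bit clear: final byte
			x |= uint64(c) << s
			// zigzag decode: 0→0, 1→-1, 2→1, 3→-2, ...
			return int64(x>>1) ^ -(int64(x) & 1), i + 1
		}
		x |= uint64(c&0x7f) << s
		s += 7
	}
	return 0, 0 // every byte had the continuation bit set
}

func main() {
	v, n := decodeVarInt([]byte{0x01})
	fmt.Println(v, n) // -1 1
	v, n = decodeVarInt([]byte{0x96, 0x01})
	fmt.Println(v, n) // 75 2
}
```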
func readBool(r *bufio.Reader, sz int, v *bool) (int, error) {
	return peekRead(r, sz, 1, func(b []byte) { *v = b[0] != 0 })
}

func readString(r *bufio.Reader, sz int, v *string) (int, error) {
	return readStringWith(r, sz, func(r *bufio.Reader, sz int, n int) (remain int, err error) {
		*v, remain, err = readNewString(r, sz, n)
		return
	})
}

func readStringWith(r *bufio.Reader, sz int, cb func(*bufio.Reader, int, int) (int, error)) (int, error) {
	var err error
	var len int16

	if sz, err = readInt16(r, sz, &len); err != nil {
		return sz, err
	}

	n := int(len)
	if n > sz {
		return sz, errShortRead
	}

	return cb(r, sz, n)
}

func readNewString(r *bufio.Reader, sz int, n int) (string, int, error) {
	b, sz, err := readNewBytes(r, sz, n)
	return string(b), sz, err
}

func readBytes(r *bufio.Reader, sz int, v *[]byte) (int, error) {
	return readBytesWith(r, sz, func(r *bufio.Reader, sz int, n int) (remain int, err error) {
		*v, remain, err = readNewBytes(r, sz, n)
		return
	})
}

func readBytesWith(r *bufio.Reader, sz int, cb func(*bufio.Reader, int, int) (int, error)) (int, error) {
	var err error
	var n int

	if sz, err = readArrayLen(r, sz, &n); err != nil {
		return sz, err
	}

	if n > sz {
		return sz, errShortRead
	}

	return cb(r, sz, n)
}

func readNewBytes(r *bufio.Reader, sz int, n int) ([]byte, int, error) {
	var err error
	var b []byte
	var shortRead bool

	if n > 0 {
		if sz < n {
			n = sz
			shortRead = true
		}

		b = make([]byte, n)
		n, err = io.ReadFull(r, b)
		b = b[:n]
		sz -= n

		if err == nil && shortRead {
			err = errShortRead
		}
	}

	return b, sz, err
}

func readArrayLen(r *bufio.Reader, sz int, n *int) (int, error) {
	var err error
	var len int32
	if sz, err = readInt32(r, sz, &len); err != nil {
		return sz, err
	}
	*n = int(len)
	return sz, nil
}

func readArrayWith(r *bufio.Reader, sz int, cb func(*bufio.Reader, int) (int, error)) (int, error) {
	var err error
	var len int32

	if sz, err = readInt32(r, sz, &len); err != nil {
		return sz, err
	}

	for n := int(len); n > 0; n-- {
		if sz, err = cb(r, sz); err != nil {
			break
		}
	}

	return sz, err
}

func readStringArray(r *bufio.Reader, sz int, v *[]string) (remain int, err error) {
	var content []string
	fn := func(r *bufio.Reader, size int) (fnRemain int, fnErr error) {
		var value string
		if fnRemain, fnErr = readString(r, size, &value); fnErr != nil {
			return
		}
		content = append(content, value)
		return
	}
	if remain, err = readArrayWith(r, sz, fn); err != nil {
		return
	}

	*v = content
	return
}

func readMapStringInt32(r *bufio.Reader, sz int, v *map[string][]int32) (remain int, err error) {
	var len int32
	if remain, err = readInt32(r, sz, &len); err != nil {
		return
	}

	content := make(map[string][]int32, len)
	for i := 0; i < int(len); i++ {
		var key string
		var values []int32

		if remain, err = readString(r, remain, &key); err != nil {
			return
		}

		fn := func(r *bufio.Reader, size int) (fnRemain int, fnErr error) {
			var value int32
			if fnRemain, fnErr = readInt32(r, size, &value); fnErr != nil {
				return
			}
			values = append(values, value)
			return
		}
		if remain, err = readArrayWith(r, remain, fn); err != nil {
			return
		}

		content[key] = values
	}
	*v = content

	return
}

func read(r *bufio.Reader, sz int, a interface{}) (int, error) {
	switch v := a.(type) {
	case *int8:
		return readInt8(r, sz, v)
	case *int16:
		return readInt16(r, sz, v)
	case *int32:
		return readInt32(r, sz, v)
	case *int64:
		return readInt64(r, sz, v)
	case *bool:
		return readBool(r, sz, v)
	case *string:
		return readString(r, sz, v)
	case *[]byte:
		return readBytes(r, sz, v)
	}
	switch v := reflect.ValueOf(a).Elem(); v.Kind() {
	case reflect.Struct:
		return readStruct(r, sz, v)
	case reflect.Slice:
		return readSlice(r, sz, v)
	default:
		panic(fmt.Sprintf("unsupported type: %T", a))
	}
}

func ReadAll(r *bufio.Reader, sz int, ptrs ...interface{}) (int, error) {
	var err error

	for _, ptr := range ptrs {
		if sz, err = readPtr(r, sz, ptr); err != nil {
			break
		}
	}

	return sz, err
}

func readPtr(r *bufio.Reader, sz int, ptr interface{}) (int, error) {
	switch v := ptr.(type) {
	case *int8:
		return readInt8(r, sz, v)
	case *int16:
		return readInt16(r, sz, v)
	case *int32:
		return readInt32(r, sz, v)
	case *int64:
		return readInt64(r, sz, v)
	case *string:
		return readString(r, sz, v)
	case *[]byte:
		return readBytes(r, sz, v)
	case readable:
		return v.readFrom(r, sz)
	default:
		panic(fmt.Sprintf("unsupported type: %T", v))
	}
}

func readStruct(r *bufio.Reader, sz int, v reflect.Value) (int, error) {
	var err error
	for i, n := 0, v.NumField(); i != n; i++ {
		if sz, err = read(r, sz, v.Field(i).Addr().Interface()); err != nil {
			return sz, err
		}
	}
	return sz, nil
}

func readSlice(r *bufio.Reader, sz int, v reflect.Value) (int, error) {
	var err error
	var len int32

	if sz, err = readInt32(r, sz, &len); err != nil {
|
||||
return sz, err
|
||||
}
|
||||
|
||||
if n := int(len); n < 0 {
|
||||
v.Set(reflect.Zero(v.Type()))
|
||||
} else {
|
||||
v.Set(reflect.MakeSlice(v.Type(), n, n))
|
||||
|
||||
for i := 0; i != n; i++ {
|
||||
if sz, err = read(r, sz, v.Index(i).Addr().Interface()); err != nil {
|
||||
return sz, err
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return sz, nil
|
||||
}
|
||||
|
||||
func readFetchResponseHeaderV2(r *bufio.Reader, size int) (throttle int32, watermark int64, remain int, err error) {
|
||||
var n int32
|
||||
var p struct {
|
||||
Partition int32
|
||||
ErrorCode int16
|
||||
HighwaterMarkOffset int64
|
||||
MessageSetSize int32
|
||||
}
|
||||
|
||||
if remain, err = readInt32(r, size, &throttle); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
if remain, err = readInt32(r, remain, &n); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
// This error should never trigger, unless there's a bug in the kafka client
|
||||
// or server.
|
||||
if n != 1 {
|
||||
err = fmt.Errorf("1 kafka topic was expected in the fetch response but the client received %d", n)
|
||||
return
|
||||
}
|
||||
|
||||
// We ignore the topic name because we've requests messages for a single
|
||||
// topic, unless there's a bug in the kafka server we will have received
|
||||
// the name of the topic that we requested.
|
||||
if remain, err = discardString(r, remain); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
if remain, err = readInt32(r, remain, &n); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
// This error should never trigger, unless there's a bug in the kafka client
|
||||
// or server.
|
||||
if n != 1 {
|
||||
err = fmt.Errorf("1 kafka partition was expected in the fetch response but the client received %d", n)
|
||||
return
|
||||
}
|
||||
|
||||
if remain, err = read(r, remain, &p); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
if p.ErrorCode != 0 {
|
||||
err = Error(p.ErrorCode)
|
||||
return
|
||||
}
|
||||
|
||||
// This error should never trigger, unless there's a bug in the kafka client
|
||||
// or server.
|
||||
if remain != int(p.MessageSetSize) {
|
||||
err = fmt.Errorf("the size of the message set in a fetch response doesn't match the number of remaining bytes (message set size = %d, remaining bytes = %d)", p.MessageSetSize, remain)
|
||||
return
|
||||
}
|
||||
|
||||
watermark = p.HighwaterMarkOffset
|
||||
return
|
||||
}
|
||||
|
||||
func readFetchResponseHeaderV5(r *bufio.Reader, size int) (throttle int32, watermark int64, remain int, err error) {
|
||||
var n int32
|
||||
type AbortedTransaction struct {
|
||||
ProducerId int64
|
||||
FirstOffset int64
|
||||
}
|
||||
var p struct {
|
||||
Partition int32
|
||||
ErrorCode int16
|
||||
HighwaterMarkOffset int64
|
||||
LastStableOffset int64
|
||||
LogStartOffset int64
|
||||
}
|
||||
var messageSetSize int32
|
||||
var abortedTransactions []AbortedTransaction
|
||||
|
||||
if remain, err = readInt32(r, size, &throttle); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
if remain, err = readInt32(r, remain, &n); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
// This error should never trigger, unless there's a bug in the kafka client
|
||||
// or server.
|
||||
if n != 1 {
|
||||
err = fmt.Errorf("1 kafka topic was expected in the fetch response but the client received %d", n)
|
||||
return
|
||||
}
|
||||
|
||||
// We ignore the topic name because we've requests messages for a single
|
||||
// topic, unless there's a bug in the kafka server we will have received
|
||||
// the name of the topic that we requested.
|
||||
if remain, err = discardString(r, remain); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
if remain, err = readInt32(r, remain, &n); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
// This error should never trigger, unless there's a bug in the kafka client
|
||||
// or server.
|
||||
if n != 1 {
|
||||
err = fmt.Errorf("1 kafka partition was expected in the fetch response but the client received %d", n)
|
||||
return
|
||||
}
|
||||
|
||||
if remain, err = read(r, remain, &p); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
var abortedTransactionLen int
|
||||
if remain, err = readArrayLen(r, remain, &abortedTransactionLen); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
if abortedTransactionLen == -1 {
|
||||
abortedTransactions = nil
|
||||
} else {
|
||||
abortedTransactions = make([]AbortedTransaction, abortedTransactionLen)
|
||||
for i := 0; i < abortedTransactionLen; i++ {
|
||||
if remain, err = read(r, remain, &abortedTransactions[i]); err != nil {
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if p.ErrorCode != 0 {
|
||||
err = Error(p.ErrorCode)
|
||||
return
|
||||
}
|
||||
|
||||
remain, err = readInt32(r, remain, &messageSetSize)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
// This error should never trigger, unless there's a bug in the kafka client
|
||||
// or server.
|
||||
if remain != int(messageSetSize) {
|
||||
err = fmt.Errorf("the size of the message set in a fetch response doesn't match the number of remaining bytes (message set size = %d, remaining bytes = %d)", messageSetSize, remain)
|
||||
return
|
||||
}
|
||||
|
||||
watermark = p.HighwaterMarkOffset
|
||||
return
|
||||
|
||||
}
|
||||
|
||||
func readFetchResponseHeaderV10(r *bufio.Reader, size int) (throttle int32, watermark int64, remain int, err error) {
|
||||
var n int32
|
||||
var errorCode int16
|
||||
type AbortedTransaction struct {
|
||||
ProducerId int64
|
||||
FirstOffset int64
|
||||
}
|
||||
var p struct {
|
||||
Partition int32
|
||||
ErrorCode int16
|
||||
HighwaterMarkOffset int64
|
||||
LastStableOffset int64
|
||||
LogStartOffset int64
|
||||
}
|
||||
var messageSetSize int32
|
||||
var abortedTransactions []AbortedTransaction
|
||||
|
||||
if remain, err = readInt32(r, size, &throttle); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
if remain, err = readInt16(r, remain, &errorCode); err != nil {
|
||||
return
|
||||
}
|
||||
if errorCode != 0 {
|
||||
err = Error(errorCode)
|
||||
return
|
||||
}
|
||||
|
||||
if remain, err = discardInt32(r, remain); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
if remain, err = readInt32(r, remain, &n); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
// This error should never trigger, unless there's a bug in the kafka client
|
||||
// or server.
|
||||
if n != 1 {
|
||||
err = fmt.Errorf("1 kafka topic was expected in the fetch response but the client received %d", n)
|
||||
return
|
||||
}
|
||||
|
||||
// We ignore the topic name because we've requests messages for a single
|
||||
// topic, unless there's a bug in the kafka server we will have received
|
||||
// the name of the topic that we requested.
|
||||
if remain, err = discardString(r, remain); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
if remain, err = readInt32(r, remain, &n); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
// This error should never trigger, unless there's a bug in the kafka client
|
||||
// or server.
|
||||
if n != 1 {
|
||||
err = fmt.Errorf("1 kafka partition was expected in the fetch response but the client received %d", n)
|
||||
return
|
||||
}
|
||||
|
||||
if remain, err = read(r, remain, &p); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
var abortedTransactionLen int
|
||||
if remain, err = readArrayLen(r, remain, &abortedTransactionLen); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
if abortedTransactionLen == -1 {
|
||||
abortedTransactions = nil
|
||||
} else {
|
||||
abortedTransactions = make([]AbortedTransaction, abortedTransactionLen)
|
||||
for i := 0; i < abortedTransactionLen; i++ {
|
||||
if remain, err = read(r, remain, &abortedTransactions[i]); err != nil {
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if p.ErrorCode != 0 {
|
||||
err = Error(p.ErrorCode)
|
||||
return
|
||||
}
|
||||
|
||||
remain, err = readInt32(r, remain, &messageSetSize)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
// This error should never trigger, unless there's a bug in the kafka client
|
||||
// or server.
|
||||
if remain != int(messageSetSize) {
|
||||
err = fmt.Errorf("the size of the message set in a fetch response doesn't match the number of remaining bytes (message set size = %d, remaining bytes = %d)", messageSetSize, remain)
|
||||
return
|
||||
}
|
||||
|
||||
watermark = p.HighwaterMarkOffset
|
||||
return
|
||||
|
||||
}
|
||||
|
||||
func readMessageHeader(r *bufio.Reader, sz int) (offset int64, attributes int8, timestamp int64, remain int, err error) {
|
||||
var version int8
|
||||
|
||||
if remain, err = readInt64(r, sz, &offset); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
// On discarding the message size and CRC:
|
||||
// ---------------------------------------
|
||||
//
|
||||
// - Not sure why kafka gives the message size here, we already have the
|
||||
// number of remaining bytes in the response and kafka should only truncate
|
||||
// the trailing message.
|
||||
//
|
||||
// - TCP is already taking care of ensuring data integrity, no need to
|
||||
// waste resources doing it a second time so we just skip the message CRC.
|
||||
//
|
||||
if remain, err = discardN(r, remain, 8); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
if remain, err = readInt8(r, remain, &version); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
if remain, err = readInt8(r, remain, &attributes); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
switch version {
|
||||
case 0:
|
||||
case 1:
|
||||
remain, err = readInt64(r, remain, ×tamp)
|
||||
default:
|
||||
err = fmt.Errorf("unsupported message version %d found in fetch response", version)
|
||||
}
|
||||
|
||||
return
|
||||
}
|
||||
314 tap/extensions/kafka/record.go Normal file
@@ -0,0 +1,314 @@
package main

import (
	"encoding/binary"
	"io"
	"time"

	"github.com/segmentio/kafka-go/compress"
)

// Attributes is a bitset representing special attributes set on records.
type Attributes int16

const (
	Gzip          Attributes = Attributes(compress.Gzip)   // 1
	Snappy        Attributes = Attributes(compress.Snappy) // 2
	Lz4           Attributes = Attributes(compress.Lz4)    // 3
	Zstd          Attributes = Attributes(compress.Zstd)   // 4
	Transactional Attributes = 1 << 4
	Control       Attributes = 1 << 5
)

func (a Attributes) Compression() compress.Compression {
	return compress.Compression(a & 7)
}

func (a Attributes) Transactional() bool {
	return (a & Transactional) != 0
}

func (a Attributes) Control() bool {
	return (a & Control) != 0
}

func (a Attributes) String() string {
	s := a.Compression().String()
	if a.Transactional() {
		s += "+transactional"
	}
	if a.Control() {
		s += "+control"
	}
	return s
}

// Header represents a single entry in a list of record headers.
type Header struct {
	Key   string
	Value []byte
}

// Record represents a single kafka record.
//
// Record values are not safe to use concurrently from multiple goroutines.
type Record struct {
	// The offset at which the record exists in a topic partition. This value
	// is ignored in produce requests.
	Offset int64

	// Returns the time of the record. This value may be omitted in produce
	// requests to let kafka set the time when it saves the record.
	Time time.Time

	// Returns a byte sequence containing the key of this record. The returned
	// sequence may be nil to indicate that the record has no key. If the record
	// is part of a RecordSet, the content of the key must remain valid at least
	// until the record set is closed (or until the key is closed).
	Key Bytes

	// Returns a byte sequence containing the value of this record. The returned
	// sequence may be nil to indicate that the record has no value. If the
	// record is part of a RecordSet, the content of the value must remain valid
	// at least until the record set is closed (or until the value is closed).
	Value Bytes

	// Returns the list of headers associated with this record. The returned
	// slice may be reused across calls, the program should use it as an
	// immutable value.
	Headers []Header
}

// RecordSet represents a sequence of records in Produce requests and Fetch
// responses. All v0, v1, and v2 formats are supported.
type RecordSet struct {
	// The message version that this record set will be represented as, valid
	// values are 1, or 2.
	//
	// When reading, this is the value of the highest version used in the
	// batches that compose the record set.
	//
	// When writing, this value dictates the format that the records will be
	// encoded in.
	Version int8

	// Attributes set on the record set.
	//
	// When reading, the attributes are the combination of all attributes in
	// the batches that compose the record set.
	//
	// When writing, the attributes apply to the whole sequence of records in
	// the set.
	Attributes Attributes

	// A reader exposing the sequence of records.
	//
	// When reading a RecordSet from an io.Reader, the Records field will be a
	// *RecordStream. If the program needs to access the details of each batch
	// that compose the stream, it may use type assertions to access the
	// underlying types of each batch.
	Records RecordReader
}

// bufferedReader is an interface implemented by types like bufio.Reader, which
// we use to optimize prefix reads by accessing the internal buffer directly
// through calls to Peek.
type bufferedReader interface {
	Discard(int) (int, error)
	Peek(int) ([]byte, error)
}

// bytesBuffer is an interface implemented by types like bytes.Buffer, which we
// use to optimize prefix reads by accessing the internal buffer directly
// through calls to Bytes.
type bytesBuffer interface {
	Bytes() []byte
}

// magicByteOffset is the position of the magic byte in all versions of record
// sets in the kafka protocol.
const magicByteOffset = 16

// ReadFrom reads the representation of a record set from r into rs, returning
// the number of bytes consumed from r, and a non-nil error if the record set
// could not be read.
func (rs *RecordSet) ReadFrom(r io.Reader) (int64, error) {
	// d, _ := r.(*decoder)
	// if d == nil {
	// 	d = &decoder{
	// 		reader: r,
	// 		remain: 4,
	// 	}
	// }

	// *rs = RecordSet{}
	// limit := d.remain
	// size := d.readInt32()

	// if d.err != nil {
	// 	return int64(limit - d.remain), d.err
	// }

	// if size <= 0 {
	// 	return 4, nil
	// }

	// stream := &RecordStream{
	// 	Records: make([]RecordReader, 0, 4),
	// }

	// var err error
	// d.remain = int(size)

	// for d.remain > 0 && err == nil {
	// 	var version byte

	// 	if d.remain < (magicByteOffset + 1) {
	// 		if len(stream.Records) != 0 {
	// 			break
	// 		}
	// 		return 4, fmt.Errorf("impossible record set shorter than %d bytes", magicByteOffset+1)
	// 	}

	// 	switch r := d.reader.(type) {
	// 	case bufferedReader:
	// 		b, err := r.Peek(magicByteOffset + 1)
	// 		if err != nil {
	// 			n, _ := r.Discard(len(b))
	// 			return 4 + int64(n), dontExpectEOF(err)
	// 		}
	// 		version = b[magicByteOffset]
	// 	case bytesBuffer:
	// 		version = r.Bytes()[magicByteOffset]
	// 	default:
	// 		b := make([]byte, magicByteOffset+1)
	// 		if n, err := io.ReadFull(d.reader, b); err != nil {
	// 			return 4 + int64(n), dontExpectEOF(err)
	// 		}
	// 		version = b[magicByteOffset]
	// 		// Reconstruct the prefix that we had to read to determine the version
	// 		// of the record set from the magic byte.
	// 		//
	// 		// Technically this may recursively stack readers when consuming all
	// 		// items of the batch, which could hurt performance. In practice this
	// 		// path should not be taken though, since the decoder would read from
	// 		// a *bufio.Reader which implements the bufferedReader interface.
	// 		d.reader = io.MultiReader(bytes.NewReader(b), d.reader)
	// 	}

	// 	var tmp RecordSet
	// 	switch version {
	// 	case 0, 1:
	// 		err = tmp.readFromVersion1(d)
	// 	case 2:
	// 		err = tmp.readFromVersion2(d)
	// 	default:
	// 		err = fmt.Errorf("unsupported message version %d for message of size %d", version, size)
	// 	}

	// 	if tmp.Version > rs.Version {
	// 		rs.Version = tmp.Version
	// 	}

	// 	rs.Attributes |= tmp.Attributes

	// 	if tmp.Records != nil {
	// 		stream.Records = append(stream.Records, tmp.Records)
	// 	}
	// }

	// if len(stream.Records) != 0 {
	// 	rs.Records = stream
	// 	// Ignore errors if we've successfully read records, so the
	// 	// program can keep making progress.
	// 	err = nil
	// }

	// d.discardAll()
	// rn := 4 + (int(size) - d.remain)
	// d.remain = limit - rn
	// return int64(rn), err
	return 0, nil
}

// WriteTo writes the representation of rs into w. The value of rs.Version
// dictates which format that the record set will be represented as.
//
// The error will be ErrNoRecord if rs contained no records.
//
// Note: since this package is only compatible with kafka 0.10 and above, the
// method never produces messages in version 0. If rs.Version is zero, the
// method defaults to producing messages in version 1.
func (rs *RecordSet) WriteTo(w io.Writer) (int64, error) {
	// if rs.Records == nil {
	// 	return 0, ErrNoRecord
	// }

	// // This optimization avoids rendering the record set in an intermediary
	// // buffer when the writer is already a pageBuffer, which is a common case
	// // due to the way WriteRequest and WriteResponse are implemented.
	// buffer, _ := w.(*pageBuffer)
	// bufferOffset := int64(0)

	// if buffer != nil {
	// 	bufferOffset = buffer.Size()
	// } else {
	// 	buffer = newPageBuffer()
	// 	defer buffer.unref()
	// }

	// size := packUint32(0)
	// buffer.Write(size[:]) // size placeholder

	// var err error
	// switch rs.Version {
	// case 0, 1:
	// 	err = rs.writeToVersion1(buffer, bufferOffset+4)
	// case 2:
	// 	err = rs.writeToVersion2(buffer, bufferOffset+4)
	// default:
	// 	err = fmt.Errorf("unsupported record set version %d", rs.Version)
	// }
	// if err != nil {
	// 	return 0, err
	// }

	// n := buffer.Size() - bufferOffset
	// if n == 0 {
	// 	size = packUint32(^uint32(0))
	// } else {
	// 	size = packUint32(uint32(n) - 4)
	// }
	// buffer.WriteAt(size[:], bufferOffset)

	// // This condition indicates that the output writer received by `WriteTo` was
	// // not a *pageBuffer, in which case we need to flush the buffered records
	// // data into it.
	// if buffer != w {
	// 	return buffer.WriteTo(w)
	// }

	// return n, nil
	return 0, nil
}

func makeTime(t int64) time.Time {
	return time.Unix(t/1000, (t%1000)*int64(time.Millisecond))
}

func timestamp(t time.Time) int64 {
	if t.IsZero() {
		return 0
	}
	return t.UnixNano() / int64(time.Millisecond)
}

func packUint32(u uint32) (b [4]byte) {
	binary.BigEndian.PutUint32(b[:], u)
	return
}

func packUint64(u uint64) (b [8]byte) {
	binary.BigEndian.PutUint64(b[:], u)
	return
}
43 tap/extensions/kafka/record_bytes.go Normal file
@@ -0,0 +1,43 @@
package main

import (
	"github.com/segmentio/kafka-go/protocol"
)

// Header is a key/value pair type representing headers set on records.
// type Header = protocol.Header

// Bytes is an interface representing a sequence of bytes. This abstraction
// makes it possible for programs to inject data into produce requests without
// having to load it into an intermediary buffer, or read record keys and values
// from a fetch response directly from internal buffers.
//
// Bytes are not safe to use concurrently from multiple goroutines.
// type Bytes = protocol.Bytes

// NewBytes constructs a Bytes value from a byte slice.
//
// If b is nil, nil is returned.
// func NewBytes(b []byte) Bytes { return protocol.NewBytes(b) }

// ReadAll reads b into a byte slice.
// func ReadAll(b Bytes) ([]byte, error) { return protocol.ReadAll(b) }

// Record is an interface representing a single kafka record.
//
// Record values are not safe to use concurrently from multiple goroutines.
// type Record = protocol.Record

// RecordReader is an interface representing a sequence of records. Record sets
// are used in both produce and fetch requests to represent the sequence of
// records that are sent to or received from kafka brokers.
//
// RecordReader values are not safe to use concurrently from multiple goroutines.
type RecordReader = protocol.RecordReader

// NewRecordReader constructs a RecordSet which exposes the sequence of records
// passed as arguments.
func NewRecordReader(records ...Record) RecordReader {
	// return protocol.NewRecordReader(records...)
	return nil
}
101 tap/extensions/kafka/reflect.go Normal file
@@ -0,0 +1,101 @@
// +build !unsafe

package main

import (
	"reflect"
)

type index []int

type _type struct{ typ reflect.Type }

func typeOf(x interface{}) _type {
	return makeType(reflect.TypeOf(x))
}

func elemTypeOf(x interface{}) _type {
	return makeType(reflect.TypeOf(x).Elem())
}

func makeType(t reflect.Type) _type {
	return _type{typ: t}
}

type value struct {
	val reflect.Value
}

func nonAddressableValueOf(x interface{}) value {
	return value{val: reflect.ValueOf(x)}
}

func valueOf(x interface{}) value {
	return value{val: reflect.ValueOf(x).Elem()}
}

func makeValue(t reflect.Type) value {
	return value{val: reflect.New(t).Elem()}
}

func (v value) bool() bool { return v.val.Bool() }

func (v value) int8() int8 { return int8(v.int64()) }

func (v value) int16() int16 { return int16(v.int64()) }

func (v value) int32() int32 { return int32(v.int64()) }

func (v value) int64() int64 { return v.val.Int() }

func (v value) string() string { return v.val.String() }

func (v value) bytes() []byte { return v.val.Bytes() }

func (v value) iface(t reflect.Type) interface{} { return v.val.Addr().Interface() }

func (v value) array(t reflect.Type) array { return array{val: v.val} }

func (v value) setBool(b bool) { v.val.SetBool(b) }

func (v value) setInt8(i int8) { v.setInt64(int64(i)) }

func (v value) setInt16(i int16) { v.setInt64(int64(i)) }

func (v value) setInt32(i int32) { v.setInt64(int64(i)) }

func (v value) setInt64(i int64) { v.val.SetInt(i) }

func (v value) setString(s string) { v.val.SetString(s) }

func (v value) setBytes(b []byte) { v.val.SetBytes(b) }

func (v value) setArray(a array) {
	if a.val.IsValid() {
		v.val.Set(a.val)
	} else {
		v.val.Set(reflect.Zero(v.val.Type()))
	}
}

func (v value) fieldByIndex(i index) value {
	return value{val: v.val.FieldByIndex(i)}
}

type array struct {
	val reflect.Value
}

func makeArray(t reflect.Type, n int) array {
	return array{val: reflect.MakeSlice(reflect.SliceOf(t), n, n)}
}

func (a array) index(i int) value { return value{val: a.val.Index(i)} }

func (a array) length() int { return a.val.Len() }

func (a array) isNil() bool { return a.val.IsNil() }

func indexOf(s reflect.StructField) index { return index(s.Index) }

func bytesToString(b []byte) string { return string(b) }
293 tap/extensions/kafka/request.go Normal file
@@ -0,0 +1,293 @@
package main

import (
	"fmt"
	"io"
	"reflect"
	"time"

	"github.com/up9inc/mizu/tap/api"
)

type Request struct {
	Size          int32
	ApiKey        ApiKey
	ApiVersion    int16
	CorrelationID int32
	ClientID      string
	Payload       interface{}
	CaptureTime   time.Time
}

func ReadRequest(r io.Reader, tcpID *api.TcpID, superTimer *api.SuperTimer) (apiKey ApiKey, apiVersion int16, err error) {
	d := &decoder{reader: r, remain: 4}
	size := d.readInt32()

	if size > 1000000 {
		return 0, 0, fmt.Errorf("A Kafka message cannot be bigger than 1MB")
	}

	if size < 8 {
		return 0, 0, fmt.Errorf("A Kafka request header cannot be smaller than 8 bytes")
	}

	if err = d.err; err != nil {
		err = dontExpectEOF(err)
		return 0, 0, err
	}

	d.remain = int(size)
	apiKey = ApiKey(d.readInt16())
	apiVersion = d.readInt16()
	correlationID := d.readInt32()
	clientID := d.readString()

	if i := int(apiKey); i < 0 || i >= len(apiTypes) {
		err = fmt.Errorf("unsupported api key: %d", i)
		return apiKey, apiVersion, err
	}

	if err = d.err; err != nil {
		err = dontExpectEOF(err)
		return apiKey, apiVersion, err
	}

	t := &apiTypes[apiKey]
	if t == nil {
		err = fmt.Errorf("unsupported api: %s", apiNames[apiKey])
		return apiKey, apiVersion, err
	}

	var payload interface{}

	switch apiKey {
	case Metadata:
		var mt interface{}
		var metadataRequest interface{}
		if apiVersion >= 11 {
			types := makeTypes(reflect.TypeOf(&MetadataRequestV11{}).Elem())
			mt = types[0]
			metadataRequest = &MetadataRequestV11{}
		} else if apiVersion >= 10 {
			types := makeTypes(reflect.TypeOf(&MetadataRequestV10{}).Elem())
			mt = types[0]
			metadataRequest = &MetadataRequestV10{}
		} else if apiVersion >= 8 {
			types := makeTypes(reflect.TypeOf(&MetadataRequestV8{}).Elem())
			mt = types[0]
			metadataRequest = &MetadataRequestV8{}
		} else if apiVersion >= 4 {
			types := makeTypes(reflect.TypeOf(&MetadataRequestV4{}).Elem())
			mt = types[0]
			metadataRequest = &MetadataRequestV4{}
		} else {
			types := makeTypes(reflect.TypeOf(&MetadataRequestV0{}).Elem())
			mt = types[0]
			metadataRequest = &MetadataRequestV0{}
		}
		mt.(messageType).decode(d, valueOf(metadataRequest))
		payload = metadataRequest
	case ApiVersions:
		var mt interface{}
		var apiVersionsRequest interface{}
		if apiVersion >= 3 {
			types := makeTypes(reflect.TypeOf(&ApiVersionsRequestV3{}).Elem())
			mt = types[0]
			apiVersionsRequest = &ApiVersionsRequestV3{}
		} else {
			types := makeTypes(reflect.TypeOf(&ApiVersionsRequestV0{}).Elem())
			mt = types[0]
			apiVersionsRequest = &ApiVersionsRequestV0{}
		}
		mt.(messageType).decode(d, valueOf(apiVersionsRequest))
		payload = apiVersionsRequest
	case Produce:
		var mt interface{}
		var produceRequest interface{}
		if apiVersion >= 3 {
			types := makeTypes(reflect.TypeOf(&ProduceRequestV3{}).Elem())
			mt = types[0]
			produceRequest = &ProduceRequestV3{}
		} else {
			types := makeTypes(reflect.TypeOf(&ProduceRequestV0{}).Elem())
			mt = types[0]
			produceRequest = &ProduceRequestV0{}
		}
		mt.(messageType).decode(d, valueOf(produceRequest))
		payload = produceRequest
	case Fetch:
		var mt interface{}
		var fetchRequest interface{}
		if apiVersion >= 11 {
			types := makeTypes(reflect.TypeOf(&FetchRequestV11{}).Elem())
			mt = types[0]
			fetchRequest = &FetchRequestV11{}
		} else if apiVersion >= 9 {
			types := makeTypes(reflect.TypeOf(&FetchRequestV9{}).Elem())
			mt = types[0]
			fetchRequest = &FetchRequestV9{}
		} else if apiVersion >= 7 {
			types := makeTypes(reflect.TypeOf(&FetchRequestV7{}).Elem())
			mt = types[0]
			fetchRequest = &FetchRequestV7{}
		} else if apiVersion >= 5 {
			types := makeTypes(reflect.TypeOf(&FetchRequestV5{}).Elem())
			mt = types[0]
			fetchRequest = &FetchRequestV5{}
		} else if apiVersion >= 4 {
			types := makeTypes(reflect.TypeOf(&FetchRequestV4{}).Elem())
			mt = types[0]
			fetchRequest = &FetchRequestV4{}
		} else if apiVersion >= 3 {
			types := makeTypes(reflect.TypeOf(&FetchRequestV3{}).Elem())
			mt = types[0]
			fetchRequest = &FetchRequestV3{}
		} else {
			types := makeTypes(reflect.TypeOf(&FetchRequestV0{}).Elem())
			mt = types[0]
			fetchRequest = &FetchRequestV0{}
		}
		mt.(messageType).decode(d, valueOf(fetchRequest))
		payload = fetchRequest
	case ListOffsets:
		var mt interface{}
		var listOffsetsRequest interface{}
		if apiVersion >= 4 {
			types := makeTypes(reflect.TypeOf(&ListOffsetsRequestV4{}).Elem())
			mt = types[0]
			listOffsetsRequest = &ListOffsetsRequestV4{}
		} else if apiVersion >= 2 {
			types := makeTypes(reflect.TypeOf(&ListOffsetsRequestV2{}).Elem())
			mt = types[0]
			listOffsetsRequest = &ListOffsetsRequestV2{}
		} else if apiVersion >= 1 {
			types := makeTypes(reflect.TypeOf(&ListOffsetsRequestV1{}).Elem())
			mt = types[0]
			listOffsetsRequest = &ListOffsetsRequestV1{}
		} else {
			types := makeTypes(reflect.TypeOf(&ListOffsetsRequestV0{}).Elem())
			mt = types[0]
			listOffsetsRequest = &ListOffsetsRequestV0{}
		}
		mt.(messageType).decode(d, valueOf(listOffsetsRequest))
		payload = listOffsetsRequest
	case CreateTopics:
		var mt interface{}
		var createTopicsRequest interface{}
		if apiVersion >= 1 {
			types := makeTypes(reflect.TypeOf(&CreateTopicsRequestV1{}).Elem())
			mt = types[0]
			createTopicsRequest = &CreateTopicsRequestV1{}
		} else {
			types := makeTypes(reflect.TypeOf(&CreateTopicsRequestV0{}).Elem())
|
||||
mt = types[0]
|
||||
createTopicsRequest = &CreateTopicsRequestV0{}
|
||||
}
|
||||
mt.(messageType).decode(d, valueOf(createTopicsRequest))
|
||||
payload = createTopicsRequest
|
||||
break
|
||||
case DeleteTopics:
|
||||
var mt interface{}
|
||||
var deleteTopicsRequest interface{}
|
||||
if apiVersion >= 6 {
|
||||
types := makeTypes(reflect.TypeOf(&DeleteTopicsRequestV6{}).Elem())
|
||||
mt = types[0]
|
||||
deleteTopicsRequest = &DeleteTopicsRequestV6{}
|
||||
} else {
|
||||
types := makeTypes(reflect.TypeOf(&DeleteTopicsRequestV0{}).Elem())
|
||||
mt = types[0]
|
||||
deleteTopicsRequest = &DeleteTopicsRequestV0{}
|
||||
}
|
||||
mt.(messageType).decode(d, valueOf(deleteTopicsRequest))
|
||||
payload = deleteTopicsRequest
|
||||
default:
|
||||
return apiKey, 0, fmt.Errorf("(Request) Not implemented: %s", apiKey)
|
||||
}
|
||||
|
||||
request := &Request{
|
||||
Size: size,
|
||||
ApiKey: apiKey,
|
||||
ApiVersion: apiVersion,
|
||||
CorrelationID: correlationID,
|
||||
ClientID: clientID,
|
||||
CaptureTime: superTimer.CaptureTime,
|
||||
Payload: payload,
|
||||
}
|
||||
|
||||
key := fmt.Sprintf(
|
||||
"%s:%s->%s:%s::%d",
|
||||
tcpID.SrcIP,
|
||||
tcpID.SrcPort,
|
||||
tcpID.DstIP,
|
||||
tcpID.DstPort,
|
||||
correlationID,
|
||||
)
|
||||
reqResMatcher.registerRequest(key, request)
|
||||
|
||||
d.discardAll()
|
||||
|
||||
return apiKey, apiVersion, nil
|
||||
}
|
||||
|
||||
func WriteRequest(w io.Writer, apiVersion int16, correlationID int32, clientID string, msg Message) error {
|
||||
apiKey := msg.ApiKey()
|
||||
|
||||
if i := int(apiKey); i < 0 || i >= len(apiTypes) {
|
||||
return fmt.Errorf("unsupported api key: %d", i)
|
||||
}
|
||||
|
||||
t := &apiTypes[apiKey]
|
||||
if t == nil {
|
||||
return fmt.Errorf("unsupported api: %s", apiNames[apiKey])
|
||||
}
|
||||
|
||||
minVersion := t.minVersion()
|
||||
maxVersion := t.maxVersion()
|
||||
|
||||
if apiVersion < minVersion || apiVersion > maxVersion {
|
||||
return fmt.Errorf("unsupported %s version: v%d not in range v%d-v%d", apiKey, apiVersion, minVersion, maxVersion)
|
||||
}
|
||||
|
||||
r := &t.requests[apiVersion-minVersion]
|
||||
v := valueOf(msg)
|
||||
b := newPageBuffer()
|
||||
defer b.unref()
|
||||
|
||||
e := &encoder{writer: b}
|
||||
e.writeInt32(0) // placeholder for the request size
|
||||
e.writeInt16(int16(apiKey))
|
||||
e.writeInt16(apiVersion)
|
||||
e.writeInt32(correlationID)
|
||||
|
||||
if r.flexible {
|
||||
// Flexible messages use a nullable string for the client ID, then extra space for a
|
||||
// tag buffer, which begins with a size value. Since we're not writing any fields into the
|
||||
// latter, we can just write zero for now.
|
||||
//
|
||||
// See
|
||||
// https://cwiki.apache.org/confluence/display/KAFKA/KIP-482%3A+The+Kafka+Protocol+should+Support+Optional+Tagged+Fields
|
||||
// for details.
|
||||
e.writeNullString(clientID)
|
||||
e.writeUnsignedVarInt(0)
|
||||
} else {
|
||||
// Technically, recent versions of kafka interpret this field as a nullable
|
||||
// string, however kafka 0.10 expected a non-nullable string and fails with
|
||||
// a NullPointerException when it receives a null client id.
|
||||
e.writeString(clientID)
|
||||
}
|
||||
r.encode(e, v)
|
||||
err := e.err
|
||||
|
||||
if err == nil {
|
||||
size := packUint32(uint32(b.Size()) - 4)
|
||||
b.WriteAt(size[:], 0)
|
||||
_, err = b.WriteTo(w)
|
||||
}
|
||||
|
||||
return err
|
||||
}
|
||||
345 tap/extensions/kafka/response.go Normal file
@@ -0,0 +1,345 @@
package main

import (
	"fmt"
	"io"
	"reflect"
	"time"

	"github.com/up9inc/mizu/tap/api"
)

type Response struct {
	Size          int32
	CorrelationID int32
	Payload       interface{}
	CaptureTime   time.Time
}

func ReadResponse(r io.Reader, tcpID *api.TcpID, superTimer *api.SuperTimer, emitter api.Emitter) (err error) {
	d := &decoder{reader: r, remain: 4}
	size := d.readInt32()

	if size > 1000000 {
		return fmt.Errorf("A Kafka message cannot be bigger than 1MB")
	}

	if size < 4 {
		return fmt.Errorf("A Kafka response header cannot be smaller than 4 bytes")
	}
	if err = d.err; err != nil {
		err = dontExpectEOF(err)
		return err
	}

	d.remain = int(size)
	correlationID := d.readInt32()
	var payload interface{}
	response := &Response{
		Size:          size,
		CorrelationID: correlationID,
		Payload:       payload,
		CaptureTime:   superTimer.CaptureTime,
	}

	key := fmt.Sprintf(
		"%s:%s->%s:%s::%d",
		tcpID.DstIP,
		tcpID.DstPort,
		tcpID.SrcIP,
		tcpID.SrcPort,
		correlationID,
	)
	reqResPair := reqResMatcher.registerResponse(key, response)
	if reqResPair == nil {
		return fmt.Errorf("Couldn't match a Kafka response to a Kafka request in 3 seconds!")
	}
	apiKey := reqResPair.Request.ApiKey
	apiVersion := reqResPair.Request.ApiVersion

	switch apiKey {
	case Metadata:
		var mt interface{}
		var metadataResponse interface{}
		if apiVersion >= 11 {
			types := makeTypes(reflect.TypeOf(&MetadataResponseV11{}).Elem())
			mt = types[0]
			metadataResponse = &MetadataResponseV11{}
		} else if apiVersion >= 10 {
			types := makeTypes(reflect.TypeOf(&MetadataResponseV10{}).Elem())
			mt = types[0]
			metadataResponse = &MetadataResponseV10{}
		} else if apiVersion >= 8 {
			types := makeTypes(reflect.TypeOf(&MetadataResponseV8{}).Elem())
			mt = types[0]
			metadataResponse = &MetadataResponseV8{}
		} else if apiVersion >= 7 {
			types := makeTypes(reflect.TypeOf(&MetadataResponseV7{}).Elem())
			mt = types[0]
			metadataResponse = &MetadataResponseV7{}
		} else if apiVersion >= 5 {
			types := makeTypes(reflect.TypeOf(&MetadataResponseV5{}).Elem())
			mt = types[0]
			metadataResponse = &MetadataResponseV5{}
		} else if apiVersion >= 3 {
			types := makeTypes(reflect.TypeOf(&MetadataResponseV3{}).Elem())
			mt = types[0]
			metadataResponse = &MetadataResponseV3{}
		} else if apiVersion >= 2 {
			types := makeTypes(reflect.TypeOf(&MetadataResponseV2{}).Elem())
			mt = types[0]
			metadataResponse = &MetadataResponseV2{}
		} else if apiVersion >= 1 {
			types := makeTypes(reflect.TypeOf(&MetadataResponseV1{}).Elem())
			mt = types[0]
			metadataResponse = &MetadataResponseV1{}
		} else {
			types := makeTypes(reflect.TypeOf(&MetadataResponseV0{}).Elem())
			mt = types[0]
			metadataResponse = &MetadataResponseV0{}
		}
		mt.(messageType).decode(d, valueOf(metadataResponse))
		reqResPair.Response.Payload = metadataResponse
		break
	case ApiVersions:
		var mt interface{}
		var apiVersionsResponse interface{}
		if apiVersion >= 1 {
			types := makeTypes(reflect.TypeOf(&ApiVersionsResponseV1{}).Elem())
			mt = types[0]
			apiVersionsResponse = &ApiVersionsResponseV1{}
		} else {
			types := makeTypes(reflect.TypeOf(&ApiVersionsResponseV0{}).Elem())
			mt = types[0]
			apiVersionsResponse = &ApiVersionsResponseV0{}
		}
		mt.(messageType).decode(d, valueOf(apiVersionsResponse))
		reqResPair.Response.Payload = apiVersionsResponse
		break
	case Produce:
		var mt interface{}
		var produceResponse interface{}
		if apiVersion >= 8 {
			types := makeTypes(reflect.TypeOf(&ProduceResponseV8{}).Elem())
			mt = types[0]
			produceResponse = &ProduceResponseV8{}
		} else if apiVersion >= 5 {
			types := makeTypes(reflect.TypeOf(&ProduceResponseV5{}).Elem())
			mt = types[0]
			produceResponse = &ProduceResponseV5{}
		} else if apiVersion >= 2 {
			types := makeTypes(reflect.TypeOf(&ProduceResponseV2{}).Elem())
			mt = types[0]
			produceResponse = &ProduceResponseV2{}
		} else if apiVersion >= 1 {
			types := makeTypes(reflect.TypeOf(&ProduceResponseV1{}).Elem())
			mt = types[0]
			produceResponse = &ProduceResponseV1{}
		} else {
			types := makeTypes(reflect.TypeOf(&ProduceResponseV0{}).Elem())
			mt = types[0]
			produceResponse = &ProduceResponseV0{}
		}
		mt.(messageType).decode(d, valueOf(produceResponse))
		reqResPair.Response.Payload = produceResponse
		break
	case Fetch:
		var mt interface{}
		var fetchResponse interface{}
		if apiVersion >= 11 {
			types := makeTypes(reflect.TypeOf(&FetchResponseV11{}).Elem())
			mt = types[0]
			fetchResponse = &FetchResponseV11{}
		} else if apiVersion >= 7 {
			types := makeTypes(reflect.TypeOf(&FetchResponseV7{}).Elem())
			mt = types[0]
			fetchResponse = &FetchResponseV7{}
		} else if apiVersion >= 5 {
			types := makeTypes(reflect.TypeOf(&FetchResponseV5{}).Elem())
			mt = types[0]
			fetchResponse = &FetchResponseV5{}
		} else if apiVersion >= 4 {
			types := makeTypes(reflect.TypeOf(&FetchResponseV4{}).Elem())
			mt = types[0]
			fetchResponse = &FetchResponseV4{}
		} else if apiVersion >= 1 {
			types := makeTypes(reflect.TypeOf(&FetchResponseV1{}).Elem())
			mt = types[0]
			fetchResponse = &FetchResponseV1{}
		} else {
			types := makeTypes(reflect.TypeOf(&FetchResponseV0{}).Elem())
			mt = types[0]
			fetchResponse = &FetchResponseV0{}
		}
		mt.(messageType).decode(d, valueOf(fetchResponse))
		reqResPair.Response.Payload = fetchResponse
		break
	case ListOffsets:
		var mt interface{}
		var listOffsetsResponse interface{}
		if apiVersion >= 4 {
			types := makeTypes(reflect.TypeOf(&ListOffsetsResponseV4{}).Elem())
			mt = types[0]
			listOffsetsResponse = &ListOffsetsResponseV4{}
		} else if apiVersion >= 2 {
			types := makeTypes(reflect.TypeOf(&ListOffsetsResponseV2{}).Elem())
			mt = types[0]
			listOffsetsResponse = &ListOffsetsResponseV2{}
		} else if apiVersion >= 1 {
			types := makeTypes(reflect.TypeOf(&ListOffsetsResponseV1{}).Elem())
			mt = types[0]
			listOffsetsResponse = &ListOffsetsResponseV1{}
		} else {
			types := makeTypes(reflect.TypeOf(&ListOffsetsResponseV0{}).Elem())
			mt = types[0]
			listOffsetsResponse = &ListOffsetsResponseV0{}
		}
		mt.(messageType).decode(d, valueOf(listOffsetsResponse))
		reqResPair.Response.Payload = listOffsetsResponse
	case CreateTopics:
		var mt interface{}
		var createTopicsResponse interface{}
		if apiVersion >= 7 {
			types := makeTypes(reflect.TypeOf(&CreateTopicsResponseV0{}).Elem())
			mt = types[0]
			createTopicsResponse = &CreateTopicsResponseV0{}
		} else if apiVersion >= 5 {
			types := makeTypes(reflect.TypeOf(&CreateTopicsResponseV5{}).Elem())
			mt = types[0]
			createTopicsResponse = &CreateTopicsResponseV5{}
		} else if apiVersion >= 2 {
			types := makeTypes(reflect.TypeOf(&CreateTopicsResponseV2{}).Elem())
			mt = types[0]
			createTopicsResponse = &CreateTopicsResponseV2{}
		} else if apiVersion >= 1 {
			types := makeTypes(reflect.TypeOf(&CreateTopicsResponseV1{}).Elem())
			mt = types[0]
			createTopicsResponse = &CreateTopicsResponseV1{}
		} else {
			types := makeTypes(reflect.TypeOf(&CreateTopicsResponseV0{}).Elem())
			mt = types[0]
			createTopicsResponse = &CreateTopicsResponseV0{}
		}
		mt.(messageType).decode(d, valueOf(createTopicsResponse))
		reqResPair.Response.Payload = createTopicsResponse
		break
	case DeleteTopics:
		var mt interface{}
		var deleteTopicsResponse interface{}
		if apiVersion >= 6 {
			types := makeTypes(reflect.TypeOf(&DeleteTopicsReponseV6{}).Elem())
			mt = types[0]
			deleteTopicsResponse = &DeleteTopicsReponseV6{}
		} else if apiVersion >= 5 {
			types := makeTypes(reflect.TypeOf(&DeleteTopicsReponseV5{}).Elem())
			mt = types[0]
			deleteTopicsResponse = &DeleteTopicsReponseV5{}
		} else if apiVersion >= 1 {
			types := makeTypes(reflect.TypeOf(&DeleteTopicsReponseV1{}).Elem())
			mt = types[0]
			deleteTopicsResponse = &DeleteTopicsReponseV1{}
		} else {
			types := makeTypes(reflect.TypeOf(&DeleteTopicsReponseV0{}).Elem())
			mt = types[0]
			deleteTopicsResponse = &DeleteTopicsReponseV0{}
		}
		mt.(messageType).decode(d, valueOf(deleteTopicsResponse))
		reqResPair.Response.Payload = deleteTopicsResponse
	default:
		return fmt.Errorf("(Response) Not implemented: %s", apiKey)
	}

	connectionInfo := &api.ConnectionInfo{
		ClientIP:   tcpID.SrcIP,
		ClientPort: tcpID.SrcPort,
		ServerIP:   tcpID.DstIP,
		ServerPort: tcpID.DstPort,
		IsOutgoing: true,
	}

	item := &api.OutputChannelItem{
		Protocol:       _protocol,
		Timestamp:      reqResPair.Request.CaptureTime.UnixNano() / int64(time.Millisecond),
		ConnectionInfo: connectionInfo,
		Pair: &api.RequestResponsePair{
			Request: api.GenericMessage{
				IsRequest:   true,
				CaptureTime: reqResPair.Request.CaptureTime,
				Payload: KafkaPayload{
					Data: &KafkaWrapper{
						Method:  apiNames[apiKey],
						Url:     "",
						Details: reqResPair.Request,
					},
				},
			},
			Response: api.GenericMessage{
				IsRequest:   false,
				CaptureTime: reqResPair.Response.CaptureTime,
				Payload: KafkaPayload{
					Data: &KafkaWrapper{
						Method:  apiNames[apiKey],
						Url:     "",
						Details: reqResPair.Response,
					},
				},
			},
		},
	}
	emitter.Emit(item)

	if i := int(apiKey); i < 0 || i >= len(apiTypes) {
		err = fmt.Errorf("unsupported api key: %d", i)
		return err
	}

	t := &apiTypes[apiKey]
	if t == nil {
		err = fmt.Errorf("unsupported api: %s", apiNames[apiKey])
		return err
	}

	d.discardAll()

	return nil
}

func WriteResponse(w io.Writer, apiVersion int16, correlationID int32, msg Message) error {
	apiKey := msg.ApiKey()

	if i := int(apiKey); i < 0 || i >= len(apiTypes) {
		return fmt.Errorf("unsupported api key: %d", i)
	}

	t := &apiTypes[apiKey]
	if t == nil {
		return fmt.Errorf("unsupported api: %s", apiNames[apiKey])
	}

	minVersion := t.minVersion()
	maxVersion := t.maxVersion()

	if apiVersion < minVersion || apiVersion > maxVersion {
		return fmt.Errorf("unsupported %s version: v%d not in range v%d-v%d", apiKey, apiVersion, minVersion, maxVersion)
	}

	r := &t.responses[apiVersion-minVersion]
	v := valueOf(msg)
	b := newPageBuffer()
	defer b.unref()

	e := &encoder{writer: b}
	e.writeInt32(0) // placeholder for the response size
	e.writeInt32(correlationID)
	r.encode(e, v)
	err := e.err

	if err == nil {
		size := packUint32(uint32(b.Size()) - 4)
		b.WriteAt(size[:], 0)
		_, err = b.WriteTo(w)
	}

	return err
}
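Note how `ReadResponse` builds its lookup key with source and destination swapped relative to `ReadRequest`: a request travels client→broker, its response broker→client, so swapping the sides makes both ends of the exchange hash to the same string (plus the Kafka correlation ID). A small standalone sketch of this symmetry (helper names and addresses are illustrative, not from the repo):

```go
package main

import "fmt"

// requestKey is built from the requester's point of view (src -> dst).
func requestKey(srcIP, srcPort, dstIP, dstPort string, correlationID int32) string {
	return fmt.Sprintf("%s:%s->%s:%s::%d", srcIP, srcPort, dstIP, dstPort, correlationID)
}

// responseKey swaps the sides (dst -> src), because on the response the
// broker is the source and the client is the destination. The result is
// identical to the key registered for the matching request.
func responseKey(srcIP, srcPort, dstIP, dstPort string, correlationID int32) string {
	return fmt.Sprintf("%s:%s->%s:%s::%d", dstIP, dstPort, srcIP, srcPort, correlationID)
}

func main() {
	// Request: client 10.0.0.1:52000 -> broker 10.0.0.2:9092
	req := requestKey("10.0.0.1", "52000", "10.0.0.2", "9092", 7)
	// Response: broker 10.0.0.2:9092 -> client 10.0.0.1:52000
	res := responseKey("10.0.0.2", "9092", "10.0.0.1", "52000", 7)
	fmt.Println(req == res, req) // prints: true 10.0.0.1:52000->10.0.0.2:9092::7
}
```

This is why the matcher can pair the two sides with a plain map lookup instead of tracking connections.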
1000 tap/extensions/kafka/structs.go Normal file
File diff suppressed because it is too large
13 tap/go.mod
@@ -3,10 +3,15 @@ module github.com/up9inc/mizu/tap
go 1.16

require (
	github.com/bradleyfalzon/tlsx v0.0.0-20170624122154-28fd0e59bac4 // indirect
	github.com/google/gopacket v1.1.19
	github.com/google/martian v2.1.0+incompatible
	github.com/orcaman/concurrent-map v0.0.0-20210106121528-16402b402231
	github.com/romana/rlog v0.0.0-20171115192701-f018bc92e7d7
	golang.org/x/net v0.0.0-20210421230115-4e50805a0758
	github.com/bradleyfalzon/tlsx v0.0.0-20170624122154-28fd0e59bac4
	github.com/up9inc/mizu/tap/api v0.0.0
	golang.org/x/net v0.0.0-20210224082022-3d97a244fca7
	golang.org/x/sys v0.0.0-20210225134936-a50acf3fe073
	golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d
	golang.org/x/text v0.3.5
	golang.org/x/tools v0.0.0-20210106214847-113979e3529a
)

replace github.com/up9inc/mizu/tap/api v0.0.0 => ./api
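The `replace` directive added here points the `tap/api` module requirement at the local `./api` directory, so the tapper builds against the in-repo API package instead of fetching a published version. A minimal illustrative go.mod for a consumer of such a local module would look like this (module path `example.com/consumer` is hypothetical):

```
module example.com/consumer

go 1.16

require github.com/up9inc/mizu/tap/api v0.0.0

replace github.com/up9inc/mizu/tap/api v0.0.0 => ./api
```

With the replacement in place, `go build` resolves the import path to the relative directory and the `v0.0.0` version is never looked up remotely.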
34 tap/go.sum
@@ -2,30 +2,44 @@ github.com/bradleyfalzon/tlsx v0.0.0-20170624122154-28fd0e59bac4 h1:NJOOlc6ZJjix
github.com/bradleyfalzon/tlsx v0.0.0-20170624122154-28fd0e59bac4/go.mod h1:DQPxZS994Ld1Y8uwnJT+dRL04XPD0cElP/pHH/zEBHM=
github.com/google/gopacket v1.1.19 h1:ves8RnFZPGiFnTS0uPQStjwru6uO6h+nlr9j6fL7kF8=
github.com/google/gopacket v1.1.19/go.mod h1:iJ8V8n6KS+z2U1A8pUwu8bW5SyEMkXJB8Yo/Vo+TKTo=
github.com/google/martian v2.1.0+incompatible h1:/CP5g8u/VJHijgedC/Legn3BAbAaWPgecwXBIDzw5no=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/orcaman/concurrent-map v0.0.0-20210106121528-16402b402231 h1:fa50YL1pzKW+1SsBnJDOHppJN9stOEwS+CRWyUtyYGU=
github.com/orcaman/concurrent-map v0.0.0-20210106121528-16402b402231/go.mod h1:Lu3tH6HLW3feq74c2GC+jIMS/K2CFcDWnWD9XkenwhI=
github.com/romana/rlog v0.0.0-20171115192701-f018bc92e7d7 h1:jkvpcEatpwuMF5O5LVxTnehj6YZ/aEZN4NWD/Xml4pI=
github.com/romana/rlog v0.0.0-20171115192701-f018bc92e7d7/go.mod h1:KTrHyWpO1sevuXPZwyeZc72ddWRFqNSKDFl7uVWKpg0=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/lint v0.0.0-20200302205851-738671d3881b/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859 h1:R/3boaszxrf1GEUWTVDzSKVwLmSJpwZ1yqXm8j0v2QI=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20210421230115-4e50805a0758 h1:aEpZnXcAmXkd6AvLb2OPt+EN1Zu/8Ne3pCqPjja5PXY=
golang.org/x/net v0.0.0-20210421230115-4e50805a0758/go.mod h1:72T/g9IO56b78aLF+1Kcs5dz7/ng1VjMUvfKvpfy+jM=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20210224082022-3d97a244fca7 h1:OgUuv8lsRpBibGNbSizVwKWlysjaNzmC9gYMhPVfqFM=
golang.org/x/net v0.0.0-20210224082022-3d97a244fca7/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d h1:+R4KGOnez64A81RvjARKc4UT5/tI9ujCIVX+P5KiHuI=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210420072515-93ed5bcd2bfe h1:WdX7u8s3yOigWAhHEaDl8r9G+4XwFQEQFtBMYyN+kXQ=
golang.org/x/sys v0.0.0-20210420072515-93ed5bcd2bfe/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210225134936-a50acf3fe073 h1:8qxJSnu+7dRq6upnbntrmriWByIakBuct5OM/MdQC1M=
golang.org/x/sys v0.0.0-20210225134936-a50acf3fe073/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d h1:SZxvLBoTP5yHO3Frd4z4vrF+DBX9vMVanchswa69toE=
golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.6 h1:aRYxNxv6iGQlyVaZmk6ZgYEDa+Jg18DxebPSrd6bg1M=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.4 h1:0YWbFKbhXG/wIiuHDSKpS0Iy7FSA+u45VtBMfQcFTTc=
golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.5 h1:i6eZZ+zk0SOf0xgBpEpPD18qWcJda6q1sxt3S0kzyUQ=
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200130002326-2f3ba24bd6e7/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a h1:CB3a9Nez8M13wwlr/E2YtwoU+qYHKfC+JrDa45RXXoQ=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@@ -1,274 +0,0 @@
package tap

import (
	"bytes"
	"encoding/json"
	"errors"
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
	"os"
	"path/filepath"
	"strconv"
	"strings"
	"time"

	"github.com/google/martian/har"
)

const readPermission = 0644
const harFilenameSuffix = ".har"
const tempFilenameSuffix = ".har.tmp"

type PairChanItem struct {
	Request         *http.Request
	RequestTime     time.Time
	Response        *http.Response
	ResponseTime    time.Time
	RequestSenderIp string
	ConnectionInfo  *ConnectionInfo
}

func openNewHarFile(filename string) *HarFile {
	file, err := os.OpenFile(filename, os.O_APPEND|os.O_CREATE|os.O_WRONLY, readPermission)
	if err != nil {
		log.Panicf("Failed to open output file: %s (%v,%+v)", err, err, err)
	}

	harFile := HarFile{file: file, entryCount: 0}
	harFile.writeHeader()

	return &harFile
}

type HarFile struct {
	file       *os.File
	entryCount int
}

func NewEntry(request *http.Request, requestTime time.Time, response *http.Response, responseTime time.Time) (*har.Entry, error) {
	harRequest, err := har.NewRequest(request, false)
	if err != nil {
		SilentError("convert-request-to-har", "Failed converting request to HAR %s (%v,%+v)", err, err, err)
		return nil, errors.New("Failed converting request to HAR")
	}

	// For requests with multipart/form-data or application/x-www-form-urlencoded Content-Type,
	// martian/har will parse the request body and place the parameters in harRequest.PostData.Params
	// instead of harRequest.PostData.Text (as the HAR spec requires it).
	// Mizu currently only looks at PostData.Text. Therefore, instead of letting martian/har set the content of
	// PostData, always copy the request body to PostData.Text.
	if request.ContentLength > 0 {
		reqBody, err := ioutil.ReadAll(request.Body)
		if err != nil {
			SilentError("read-request-body", "Failed converting request to HAR %s (%v,%+v)", err, err, err)
			return nil, errors.New("Failed reading request body")
		}
		request.Body = ioutil.NopCloser(bytes.NewReader(reqBody))
		harRequest.PostData.Text = string(reqBody)
	}

	harResponse, err := har.NewResponse(response, true)
	if err != nil {
		SilentError("convert-response-to-har", "Failed converting response to HAR %s (%v,%+v)", err, err, err)
		return nil, errors.New("Failed converting response to HAR")
	}

	if harRequest.PostData != nil && strings.HasPrefix(harRequest.PostData.MimeType, "application/grpc") {
		// Force HTTP/2 gRPC into HAR template

		harRequest.URL = fmt.Sprintf("%s://%s%s", request.Header.Get(":scheme"), request.Header.Get(":authority"), request.Header.Get(":path"))

		status, err := strconv.Atoi(response.Header.Get(":status"))
		if err != nil {
			SilentError("convert-response-status-for-har", "Failed converting status to int %s (%v,%+v)", err, err, err)
			return nil, errors.New("Failed converting response status to int for HAR")
		}
		harResponse.Status = status
	} else {
		// Martian copies http.Request.URL.String() to har.Request.URL, which usually contains the path.
		// However, according to the HAR spec, the URL field needs to be the absolute URL.
		var scheme string
		if request.URL.Scheme != "" {
			scheme = request.URL.Scheme
		} else {
			scheme = "http"
		}
		harRequest.URL = fmt.Sprintf("%s://%s%s", scheme, request.Host, request.URL)
	}

	totalTime := responseTime.Sub(requestTime).Round(time.Millisecond).Milliseconds()
	if totalTime < 1 {
		totalTime = 1
	}

	harEntry := har.Entry{
		StartedDateTime: time.Now().UTC(),
		Time:            totalTime,
		Request:         harRequest,
		Response:        harResponse,
		Cache:           &har.Cache{},
		Timings: &har.Timings{
			Send:    -1,
			Wait:    -1,
			Receive: totalTime,
		},
	}

	return &harEntry, nil
}

func (f *HarFile) WriteEntry(harEntry *har.Entry) {
	harEntryJson, err := json.Marshal(harEntry)
	if err != nil {
		SilentError("har-entry-marshal", "Failed converting har entry object to JSON%s (%v,%+v)", err, err, err)
		return
	}

	var separator string
	if f.GetEntryCount() > 0 {
		separator = ","
	} else {
		separator = ""
	}

	harEntryString := append([]byte(separator), harEntryJson...)

	if _, err := f.file.Write(harEntryString); err != nil {
		log.Panicf("Failed to write to output file: %s (%v,%+v)", err, err, err)
	}

	f.entryCount++
}

func (f *HarFile) GetEntryCount() int {
	return f.entryCount
}

func (f *HarFile) Close() {
	f.writeTrailer()

	err := f.file.Close()
	if err != nil {
		log.Panicf("Failed to close output file: %s (%v,%+v)", err, err, err)
	}
}

func (f *HarFile) writeHeader() {
	header := []byte(`{"log": {"version": "1.2", "creator": {"name": "Mizu", "version": "0.0.1"}, "entries": [`)
	if _, err := f.file.Write(header); err != nil {
		log.Panicf("Failed to write header to output file: %s (%v,%+v)", err, err, err)
	}
}

func (f *HarFile) writeTrailer() {
	trailer := []byte("]}}")
	if _, err := f.file.Write(trailer); err != nil {
		log.Panicf("Failed to write trailer to output file: %s (%v,%+v)", err, err, err)
	}
}

func NewHarWriter(outputDir string, maxEntries int) *HarWriter {
	return &HarWriter{
		OutputDirPath: outputDir,
		MaxEntries:    maxEntries,
		PairChan:      make(chan *PairChanItem),
		OutChan:       make(chan *OutputChannelItem, 1000),
		currentFile:   nil,
		done:          make(chan bool),
	}
}

type OutputChannelItem struct {
	HarEntry               *har.Entry
	ConnectionInfo         *ConnectionInfo
	ValidationRulesChecker string
}

type HarWriter struct {
	OutputDirPath string
	MaxEntries    int
	PairChan      chan *PairChanItem
	OutChan       chan *OutputChannelItem
	currentFile   *HarFile
	done          chan bool
}

func (hw *HarWriter) WritePair(request *http.Request, requestTime time.Time, response *http.Response, responseTime time.Time, connectionInfo *ConnectionInfo) {
	hw.PairChan <- &PairChanItem{
		Request:        request,
		RequestTime:    requestTime,
		Response:       response,
		ResponseTime:   responseTime,
		ConnectionInfo: connectionInfo,
	}
}

func (hw *HarWriter) Start() {
	if hw.OutputDirPath != "" {
		if err := os.MkdirAll(hw.OutputDirPath, os.ModePerm); err != nil {
			log.Panicf("Failed to create output directory: %s (%v,%+v)", err, err, err)
		}
	}

	go func() {
		for pair := range hw.PairChan {
			harEntry, err := NewEntry(pair.Request, pair.RequestTime, pair.Response, pair.ResponseTime)
			if err != nil {
				continue
			}

			if hw.OutputDirPath != "" {
				if hw.currentFile == nil {
					hw.openNewFile()
				}

				hw.currentFile.WriteEntry(harEntry)

				if hw.currentFile.GetEntryCount() >= hw.MaxEntries {
					hw.closeFile()
				}
			} else {
				hw.OutChan <- &OutputChannelItem{
					HarEntry:       harEntry,
					ConnectionInfo: pair.ConnectionInfo,
				}
			}
		}

		if hw.currentFile != nil {
			hw.closeFile()
		}
		hw.done <- true
	}()
}

func (hw *HarWriter) Stop() {
	close(hw.PairChan)
	<-hw.done
	close(hw.OutChan)
}

func (hw *HarWriter) openNewFile() {
	filename := buildFilename(hw.OutputDirPath, time.Now(), tempFilenameSuffix)
	hw.currentFile = openNewHarFile(filename)
}

func (hw *HarWriter) closeFile() {
	hw.currentFile.Close()
	tmpFilename := hw.currentFile.file.Name()
	hw.currentFile = nil

	filename := buildFilename(hw.OutputDirPath, time.Now(), harFilenameSuffix)
	err := os.Rename(tmpFilename, filename)
	if err != nil {
		SilentError("Rename-file", "cannot rename file: %s (%v,%+v)", err, err, err)
	}
}

func buildFilename(dir string, t time.Time, suffix string) string {
	// (epoch time in nanoseconds)__(YYYY_Month_DD__hh-mm-ss).har
	filename := fmt.Sprintf("%d__%s%s", t.UnixNano(), t.Format("2006_Jan_02__15-04-05"), suffix)
	return filepath.Join(dir, filename)
}
@@ -1,122 +0,0 @@
package tap

import (
	"fmt"
	"net/http"
	"strings"
	"time"

	"github.com/orcaman/concurrent-map"
)

type requestResponsePair struct {
	Request  httpMessage `json:"request"`
	Response httpMessage `json:"response"`
}

type httpMessage struct {
	isRequest   bool
	captureTime time.Time
	orig        interface{}
}

// Key is {client_addr}:{client_port}->{dest_addr}:{dest_port}
type requestResponseMatcher struct {
	openMessagesMap cmap.ConcurrentMap
}

func createResponseRequestMatcher() requestResponseMatcher {
	newMatcher := &requestResponseMatcher{openMessagesMap: cmap.New()}
	return *newMatcher
}

func (matcher *requestResponseMatcher) registerRequest(ident string, request *http.Request, captureTime time.Time) *requestResponsePair {
	split := splitIdent(ident)
	key := genKey(split)

	requestHTTPMessage := httpMessage{
		isRequest:   true,
		captureTime: captureTime,
		orig:        request,
	}

	if response, found := matcher.openMessagesMap.Pop(key); found {
		// Type assertion always succeeds because all of the map's values are of httpMessage type
		responseHTTPMessage := response.(*httpMessage)
		if responseHTTPMessage.isRequest {
			SilentError("Request-Duplicate", "Got duplicate request with same identifier")
			return nil
		}
		Trace("Matched open Response for %s", key)
		return matcher.preparePair(&requestHTTPMessage, responseHTTPMessage)
	}

	matcher.openMessagesMap.Set(key, &requestHTTPMessage)
	Trace("Registered open Request for %s", key)
	return nil
}

func (matcher *requestResponseMatcher) registerResponse(ident string, response *http.Response, captureTime time.Time) *requestResponsePair {
	split := splitIdent(ident)
	key := genKey(split)

	responseHTTPMessage := httpMessage{
		isRequest:   false,
		captureTime: captureTime,
		orig:        response,
	}

	if request, found := matcher.openMessagesMap.Pop(key); found {
		// Type assertion always succeeds because all of the map's values are of httpMessage type
		requestHTTPMessage := request.(*httpMessage)
		if !requestHTTPMessage.isRequest {
			SilentError("Response-Duplicate", "Got duplicate response with same identifier")
			return nil
		}
		Trace("Matched open Request for %s", key)
		return matcher.preparePair(requestHTTPMessage, &responseHTTPMessage)
	}

	matcher.openMessagesMap.Set(key, &responseHTTPMessage)
	Trace("Registered open Response for %s", key)
	return nil
}

func (matcher *requestResponseMatcher) preparePair(requestHTTPMessage *httpMessage, responseHTTPMessage *httpMessage) *requestResponsePair {
	return &requestResponsePair{
		Request:  *requestHTTPMessage,
		Response: *responseHTTPMessage,
	}
}

func splitIdent(ident string) []string {
	ident = strings.Replace(ident, "->", " ", -1)
	return strings.Split(ident, " ")
}

func genKey(split []string) string {
	key := fmt.Sprintf("%s:%s->%s:%s,%s", split[0], split[2], split[1], split[3], split[4])
	return key
}

func (matcher *requestResponseMatcher) deleteOlderThan(t time.Time) int {
	keysToPop := make([]string, 0)
	for item := range matcher.openMessagesMap.IterBuffered() {
		// Map only contains values of type httpMessage
		message, _ := item.Val.(*httpMessage)

		if message.captureTime.Before(t) {
			keysToPop = append(keysToPop, item.Key)
		}
	}

	numDeleted := len(keysToPop)

	for _, key := range keysToPop {
		_, _ = matcher.openMessagesMap.Pop(key)
	}

	return numDeleted
}

@@ -1,305 +0,0 @@
package tap

import (
	"bufio"
	"bytes"
	"encoding/hex"
	"fmt"
	"io"
	"io/ioutil"
	"net/http"
	"strconv"
	"sync"
	"time"

	"github.com/bradleyfalzon/tlsx"
)

const checkTLSPacketAmount = 100

type httpReaderDataMsg struct {
	bytes     []byte
	timestamp time.Time
}

type tcpID struct {
	srcIP   string
	dstIP   string
	srcPort string
	dstPort string
}

type ConnectionInfo struct {
	ClientIP   string
	ClientPort string
	ServerIP   string
	ServerPort string
	IsOutgoing bool
}

func (tid *tcpID) String() string {
	return fmt.Sprintf("%s->%s %s->%s", tid.srcIP, tid.dstIP, tid.srcPort, tid.dstPort)
}

/* httpReader reads from a channel of bytes of tcp payload and parses it into HTTP/1 requests and responses.
 * The payload is written to the channel by a tcpStream object that is dedicated to one tcp connection.
 * An httpReader object is unidirectional: it parses either a client stream or a server stream.
 * Implements the io.Reader interface (Read).
 */
type httpReader struct {
	ident              string
	tcpID              tcpID
	isClient           bool
	isHTTP2            bool
	isOutgoing         bool
	msgQueue           chan httpReaderDataMsg // Channel of captured reassembled tcp payload
	data               []byte
	captureTime        time.Time
	hexdump            bool
	parent             *tcpStream
	grpcAssembler      GrpcAssembler
	messageCount       uint
	harWriter          *HarWriter
	packetsSeen        uint
	outboundLinkWriter *OutboundLinkWriter
}

func (h *httpReader) Read(p []byte) (int, error) {
	var msg httpReaderDataMsg

	ok := true
	for ok && len(h.data) == 0 {
		msg, ok = <-h.msgQueue
		h.data = msg.bytes

		h.captureTime = msg.timestamp
		if len(h.data) > 0 {
			h.packetsSeen += 1
		}
		if h.packetsSeen < checkTLSPacketAmount && len(msg.bytes) > 5 { // packets with less than 5 bytes cause tlsx to panic
			clientHello := tlsx.ClientHello{}
			err := clientHello.Unmarshall(msg.bytes)
			if err == nil {
				statsTracker.incTlsConnectionsCount()
				Debug("Detected TLS client hello with SNI %s\n", clientHello.SNI)
				numericPort, _ := strconv.Atoi(h.tcpID.dstPort)
				h.outboundLinkWriter.WriteOutboundLink(h.tcpID.srcIP, h.tcpID.dstIP, numericPort, clientHello.SNI, TLSProtocol)
			}
		}
	}
	if !ok || len(h.data) == 0 {
		return 0, io.EOF
	}

	l := copy(p, h.data)
	h.data = h.data[l:]
	return l, nil
}

func (h *httpReader) run(wg *sync.WaitGroup) {
	defer wg.Done()
	b := bufio.NewReader(h)

	if isHTTP2, err := checkIsHTTP2Connection(b, h.isClient); err != nil {
		SilentError("HTTP/2-Prepare-Connection", "stream %s Failed to check if client is HTTP/2: %s (%v,%+v)", h.ident, err, err, err)
		// Do something?
	} else {
		h.isHTTP2 = isHTTP2
	}

	if h.isHTTP2 {
		err := prepareHTTP2Connection(b, h.isClient)
		if err != nil {
			SilentError("HTTP/2-Prepare-Connection-After-Check", "stream %s error: %s (%v,%+v)", h.ident, err, err, err)
		}
		h.grpcAssembler = createGrpcAssembler(b)
	}

	for {
		if h.isHTTP2 {
			err := h.handleHTTP2Stream()
			if err == io.EOF || err == io.ErrUnexpectedEOF {
				break
			} else if err != nil {
				SilentError("HTTP/2", "stream %s error: %s (%v,%+v)", h.ident, err, err, err)
				continue
			}
		} else if h.isClient {
			err := h.handleHTTP1ClientStream(b)
			if err == io.EOF || err == io.ErrUnexpectedEOF {
				break
			} else if err != nil {
				SilentError("HTTP-request", "stream %s Request error: %s (%v,%+v)", h.ident, err, err, err)
				continue
			}
		} else {
			err := h.handleHTTP1ServerStream(b)
			if err == io.EOF || err == io.ErrUnexpectedEOF {
				break
			} else if err != nil {
				SilentError("HTTP-response", "stream %s Response error: %s (%v,%+v)", h.ident, err, err, err)
				continue
			}
		}
	}
}

func (h *httpReader) handleHTTP2Stream() error {
	streamID, messageHTTP1, err := h.grpcAssembler.readMessage()
	h.messageCount++
	if err != nil {
		return err
	}

	var reqResPair *requestResponsePair
	var connectionInfo *ConnectionInfo

	switch messageHTTP1 := messageHTTP1.(type) {
	case http.Request:
		ident := fmt.Sprintf("%s->%s %s->%s %d", h.tcpID.srcIP, h.tcpID.dstIP, h.tcpID.srcPort, h.tcpID.dstPort, streamID)
		connectionInfo = &ConnectionInfo{
			ClientIP:   h.tcpID.srcIP,
			ClientPort: h.tcpID.srcPort,
			ServerIP:   h.tcpID.dstIP,
			ServerPort: h.tcpID.dstPort,
			IsOutgoing: h.isOutgoing,
		}
		reqResPair = reqResMatcher.registerRequest(ident, &messageHTTP1, h.captureTime)
	case http.Response:
		ident := fmt.Sprintf("%s->%s %s->%s %d", h.tcpID.dstIP, h.tcpID.srcIP, h.tcpID.dstPort, h.tcpID.srcPort, streamID)
		connectionInfo = &ConnectionInfo{
			ClientIP:   h.tcpID.dstIP,
			ClientPort: h.tcpID.dstPort,
			ServerIP:   h.tcpID.srcIP,
			ServerPort: h.tcpID.srcPort,
			IsOutgoing: h.isOutgoing,
		}
		reqResPair = reqResMatcher.registerResponse(ident, &messageHTTP1, h.captureTime)
	}

	if reqResPair != nil {
		statsTracker.incMatchedPairs()

		if h.harWriter != nil {
			h.harWriter.WritePair(
				reqResPair.Request.orig.(*http.Request),
				reqResPair.Request.captureTime,
				reqResPair.Response.orig.(*http.Response),
				reqResPair.Response.captureTime,
				connectionInfo,
			)
		}
	}

	return nil
}

func (h *httpReader) handleHTTP1ClientStream(b *bufio.Reader) error {
	req, err := http.ReadRequest(b)
	h.messageCount++
	if err != nil {
		return err
	}
	body, err := ioutil.ReadAll(req.Body)
	req.Body = io.NopCloser(bytes.NewBuffer(body)) // rewind
	s := len(body)
	if err != nil {
		SilentError("HTTP-request-body", "stream %s Got body err: %s", h.ident, err)
	} else if h.hexdump {
		Debug("Body(%d/0x%x) - %s", len(body), len(body), hex.Dump(body))
	}
	if err := req.Body.Close(); err != nil {
		SilentError("HTTP-request-body-close", "stream %s Failed to close request body: %s", h.ident, err)
	}
	encoding := req.Header["Content-Encoding"]
	Debug("HTTP/1 Request: %s %s %s (Body:%d) -> %s", h.ident, req.Method, req.URL, s, encoding)

	ident := fmt.Sprintf("%s->%s %s->%s %d", h.tcpID.srcIP, h.tcpID.dstIP, h.tcpID.srcPort, h.tcpID.dstPort, h.messageCount)
	reqResPair := reqResMatcher.registerRequest(ident, req, h.captureTime)
	if reqResPair != nil {
		statsTracker.incMatchedPairs()

		if h.harWriter != nil {
			h.harWriter.WritePair(
				reqResPair.Request.orig.(*http.Request),
				reqResPair.Request.captureTime,
				reqResPair.Response.orig.(*http.Response),
				reqResPair.Response.captureTime,
				&ConnectionInfo{
					ClientIP:   h.tcpID.srcIP,
					ClientPort: h.tcpID.srcPort,
					ServerIP:   h.tcpID.dstIP,
					ServerPort: h.tcpID.dstPort,
					IsOutgoing: h.isOutgoing,
				},
			)
		}
	}

	h.parent.Lock()
	h.parent.urls = append(h.parent.urls, req.URL.String())
	h.parent.Unlock()

	return nil
}

func (h *httpReader) handleHTTP1ServerStream(b *bufio.Reader) error {
	res, err := http.ReadResponse(b, nil)
	h.messageCount++
	var req string
	h.parent.Lock()
	if len(h.parent.urls) == 0 {
		req = "<no-request-seen>"
	} else {
		req, h.parent.urls = h.parent.urls[0], h.parent.urls[1:]
	}
	h.parent.Unlock()
	if err != nil {
		return err
	}
	body, err := ioutil.ReadAll(res.Body)
	res.Body = io.NopCloser(bytes.NewBuffer(body)) // rewind
	s := len(body)
	if err != nil {
		SilentError("HTTP-response-body", "HTTP/%s: failed to get body(parsed len:%d): %s", h.ident, s, err)
	}
	if h.hexdump {
		Debug("Body(%d/0x%x) - %s", len(body), len(body), hex.Dump(body))
	}
	if err := res.Body.Close(); err != nil {
		SilentError("HTTP-response-body-close", "HTTP/%s: failed to close body(parsed len:%d): %s", h.ident, s, err)
	}
	sym := ","
	if res.ContentLength > 0 && res.ContentLength != int64(s) {
		sym = "!="
	}
	contentType, ok := res.Header["Content-Type"]
	if !ok {
		contentType = []string{http.DetectContentType(body)}
	}
	encoding := res.Header["Content-Encoding"]
	Debug("HTTP/1 Response: %s %s URL:%s (%d%s%d%s) -> %s", h.ident, res.Status, req, res.ContentLength, sym, s, contentType, encoding)

	ident := fmt.Sprintf("%s->%s %s->%s %d", h.tcpID.dstIP, h.tcpID.srcIP, h.tcpID.dstPort, h.tcpID.srcPort, h.messageCount)
	reqResPair := reqResMatcher.registerResponse(ident, res, h.captureTime)
	if reqResPair != nil {
		statsTracker.incMatchedPairs()

		if h.harWriter != nil {
			h.harWriter.WritePair(
				reqResPair.Request.orig.(*http.Request),
				reqResPair.Request.captureTime,
				reqResPair.Response.orig.(*http.Response),
				reqResPair.Response.captureTime,
				&ConnectionInfo{
					ClientIP:   h.tcpID.dstIP,
					ClientPort: h.tcpID.dstPort,
					ServerIP:   h.tcpID.srcIP,
					ServerPort: h.tcpID.srcPort,
					IsOutgoing: h.isOutgoing,
				},
			)
		}
	}

	return nil
}

@@ -13,6 +13,7 @@ import (
	"encoding/json"
	"flag"
	"fmt"
	"io"
	"log"
	"os"
	"os/signal"
@@ -31,28 +32,13 @@ import (
	"github.com/google/gopacket/layers" // pulls in all layers decoders
	"github.com/google/gopacket/pcap"
	"github.com/google/gopacket/reassembly"
	"github.com/up9inc/mizu/tap/api"
)

const AppPortsEnvVar = "APP_PORTS"
const maxHTTP2DataLenEnvVar = "HTTP2_DATA_SIZE_LIMIT"
const maxHTTP2DataLenDefault = 1 * 1024 * 1024 // 1MB
const cleanPeriod = time.Second * 10

var remoteOnlyOutboundPorts = []int{80, 443}

func parseAppPorts(appPortsList string) []int {
	ports := make([]int, 0)
	for _, portStr := range strings.Split(appPortsList, ",") {
		parsedInt, parseError := strconv.Atoi(portStr)
		if parseError != nil {
			log.Printf("Provided app port %v is not a valid number!", portStr)
		} else {
			ports = append(ports, parsedInt)
		}
	}
	return ports
}

var maxcount = flag.Int64("c", -1, "Only grab this many packets, then exit")
var decoder = flag.String("decoder", "", "Name of the decoder to use (default: guess from capture)")
var statsevery = flag.Int("stats", 60, "Output statistics every N seconds")
@@ -65,13 +51,6 @@ var allowmissinginit = flag.Bool("allowmissinginit", true, "Support streams with
var verbose = flag.Bool("verbose", false, "Be verbose")
var debug = flag.Bool("debug", false, "Display debug information")
var quiet = flag.Bool("quiet", false, "Be quiet regarding errors")

// http
var nohttp = flag.Bool("nohttp", false, "Disable HTTP parsing")
var output = flag.String("output", "", "Path to create file for HTTP 200 OK responses")
var writeincomplete = flag.Bool("writeincomplete", false, "Write incomplete response")

var hexdump = flag.Bool("dump", false, "Dump HTTP request/response as hex") // global
var hexdumppkt = flag.Bool("dumppkt", false, "Dump packet as hex")

// capture
@@ -80,17 +59,11 @@ var fname = flag.String("r", "", "Filename to read from, overrides -i")
var snaplen = flag.Int("s", 65536, "Snap length (number of bytes max to read per packet)")
var tstype = flag.String("timestamp_type", "", "Type of timestamps to use")
var promisc = flag.Bool("promisc", true, "Set promiscuous mode")
var anydirection = flag.Bool("anydirection", false, "Capture http requests to other hosts")
var staleTimeoutSeconds = flag.Int("staletimout", 120, "Max time in seconds to keep connections which don't transmit data")

var memprofile = flag.String("memprofile", "", "Write memory profile")

// output
var HarOutputDir = flag.String("hardir", "", "Directory in which to store output har files")
var harEntriesPerFile = flag.Int("harentriesperfile", 200, "Number of max number of har entries to store in each file")

var reqResMatcher = createResponseRequestMatcher() // global
var statsTracker = StatsTracker{}
var appStats = api.AppStats{}

// global
var stats struct {
@@ -119,8 +92,11 @@ var outputLevel int
var errorsMap map[string]uint
var errorsMapMutex sync.Mutex
var nErrors uint
var ownIps []string // global
var hostMode bool   // global
var ownIps []string             // global
var hostMode bool               // global
var extensions []*api.Extension // global

const baseStreamChannelTimeoutMs int = 5000 * 100

/* minOutputLevel: Error will be printed only if outputLevel is above this value
 * t: key for errorsMap (counting errors)
@@ -176,31 +152,43 @@ type Context struct {
	CaptureInfo gopacket.CaptureInfo
}

func GetStats() AppStats {
	return statsTracker.appStats
func GetStats() api.AppStats {
	return appStats
}

func (c *Context) GetCaptureInfo() gopacket.CaptureInfo {
	return c.CaptureInfo
}

func StartPassiveTapper(opts *TapOpts) (<-chan *OutputChannelItem, <-chan *OutboundLink) {
func StartPassiveTapper(opts *TapOpts, outputItems chan *api.OutputChannelItem, extensionsRef []*api.Extension) {
	hostMode = opts.HostMode
	extensions = extensionsRef

	harWriter := NewHarWriter(*HarOutputDir, *harEntriesPerFile)
	outboundLinkWriter := NewOutboundLinkWriter()
	if GetMemoryProfilingEnabled() {
		startMemoryProfiler()
	}

	go startPassiveTapper(harWriter, outboundLinkWriter)

	return harWriter.OutChan, outboundLinkWriter.OutChan
	go startPassiveTapper(outputItems)
}

func startMemoryProfiler() {
	dirname := "/app/pprof"
	rlog.Info("Profiling is on, results will be written to %s", dirname)
	dumpPath := "/app/pprof"
	envDumpPath := os.Getenv(MemoryProfilingDumpPath)
	if envDumpPath != "" {
		dumpPath = envDumpPath
	}
	timeInterval := 60
	envTimeInterval := os.Getenv(MemoryProfilingTimeIntervalSeconds)
	if envTimeInterval != "" {
		if i, err := strconv.Atoi(envTimeInterval); err == nil {
			timeInterval = i
		}
	}

	rlog.Info("Profiling is on, results will be written to %s", dumpPath)
	go func() {
		if _, err := os.Stat(dirname); os.IsNotExist(err) {
			if err := os.Mkdir(dirname, 0777); err != nil {
		if _, err := os.Stat(dumpPath); os.IsNotExist(err) {
			if err := os.Mkdir(dumpPath, 0777); err != nil {
				log.Fatal("could not create directory for profile: ", err)
			}
		}
@@ -208,9 +196,9 @@ func startMemoryProfiler() {
		for true {
			t := time.Now()

			filename := fmt.Sprintf("%s/%s__mem.prof", dirname, t.Format("15_04_05"))
			filename := fmt.Sprintf("%s/%s__mem.prof", dumpPath, t.Format("15_04_05"))

			rlog.Info("Writing memory profile to %s\n", filename)
			rlog.Infof("Writing memory profile to %s\n", filename)

			f, err := os.Create(filename)
			if err != nil {
@@ -221,13 +209,50 @@ func startMemoryProfiler() {
				log.Fatal("could not write memory profile: ", err)
			}
			_ = f.Close()
			time.Sleep(time.Minute)
			time.Sleep(time.Second * time.Duration(timeInterval))
		}
	}()
}

func startPassiveTapper(harWriter *HarWriter, outboundLinkWriter *OutboundLinkWriter) {
func closeTimedoutTcpStreamChannels() {
	maxNumberOfGoroutines = GetMaxNumberOfGoroutines()
	TcpStreamChannelTimeoutMs := GetTcpChannelTimeoutMs()
	for {
		time.Sleep(10 * time.Millisecond)
		streams.Range(func(key interface{}, value interface{}) bool {
			streamWrapper := value.(*tcpStreamWrapper)
			stream := streamWrapper.stream
			if stream.superIdentifier.Protocol == nil {
				if !stream.isClosed && time.Now().After(streamWrapper.createdAt.Add(TcpStreamChannelTimeoutMs)) {
					stream.Close()
					appStats.IncDroppedTcpStreams()
					rlog.Debugf("Dropped an unidentified TCP stream because of timeout. Total dropped: %d Total Goroutines: %d Timeout (ms): %d\n", appStats.DroppedTcpStreams, runtime.NumGoroutine(), TcpStreamChannelTimeoutMs/1000000)
				}
			} else {
				if !stream.superIdentifier.IsClosedOthers {
					for i := range stream.clients {
						reader := &stream.clients[i]
						if reader.extension.Protocol != stream.superIdentifier.Protocol {
							reader.Close()
						}
					}
					for i := range stream.servers {
						reader := &stream.servers[i]
						if reader.extension.Protocol != stream.superIdentifier.Protocol {
							reader.Close()
						}
					}
					stream.superIdentifier.IsClosedOthers = true
				}
			}
			return true
		})
	}
}

func startPassiveTapper(outputItems chan *api.OutputChannelItem) {
	log.SetFlags(log.LstdFlags | log.LUTC | log.Lshortfile)
	go closeTimedoutTcpStreamChannels()

	defer util.Run()()
	if *debug {
@@ -248,31 +273,6 @@ func startPassiveTapper(harWriter *HarWriter, outboundLinkWriter *OutboundLinkWr
		ownIps = localhostIPs
	}

	appPortsStr := os.Getenv(AppPortsEnvVar)
	var appPorts []int
	if appPortsStr == "" {
		rlog.Info("Received empty/no APP_PORTS env var! only listening to http on port 80!")
		appPorts = make([]int, 0)
	} else {
		appPorts = parseAppPorts(appPortsStr)
	}
	SetFilterPorts(appPorts)
	envVal := os.Getenv(maxHTTP2DataLenEnvVar)
	if envVal == "" {
		rlog.Infof("Received empty/no HTTP2_DATA_SIZE_LIMIT env var! falling back to %v", maxHTTP2DataLenDefault)
		maxHTTP2DataLen = maxHTTP2DataLenDefault
	} else {
		if convertedInt, err := strconv.Atoi(envVal); err != nil {
			rlog.Infof("Received invalid HTTP2_DATA_SIZE_LIMIT env var! falling back to %v", maxHTTP2DataLenDefault)
			maxHTTP2DataLen = maxHTTP2DataLenDefault
		} else {
			rlog.Infof("Received HTTP2_DATA_SIZE_LIMIT env var: %v", maxHTTP2DataLenDefault)
			maxHTTP2DataLen = convertedInt
		}
	}

	log.Printf("App Ports: %v", gSettings.filterPorts)

	var handle *pcap.Handle
	var err error
	if *fname != "" {
@@ -315,10 +315,6 @@ func startPassiveTapper(harWriter *HarWriter, outboundLinkWriter *OutboundLinkWr
		}
	}

	harWriter.Start()
	defer harWriter.Stop()
	defer outboundLinkWriter.Stop()

	var dec gopacket.Decoder
	var ok bool
	decoderName := *decoder
@@ -332,13 +328,16 @@ func startPassiveTapper(harWriter *HarWriter, outboundLinkWriter *OutboundLinkWr
	source.Lazy = *lazy
	source.NoCopy = true
	rlog.Info("Starting to read packets")
	statsTracker.setStartTime(time.Now())
	appStats.SetStartTime(time.Now())
	defragger := ip4defrag.NewIPv4Defragmenter()

	var emitter api.Emitter = &api.Emitting{
		AppStats:      &appStats,
		OutputChannel: outputItems,
	}

	streamFactory := &tcpStreamFactory{
		doHTTP:             !*nohttp,
		harWriter:          harWriter,
		outbountLinkWriter: outboundLinkWriter,
		Emitter: emitter,
	}
	streamPool := reassembly.NewStreamPool(streamFactory)
	assembler := reassembly.NewAssembler(streamPool)
@@ -358,7 +357,6 @@ func startPassiveTapper(harWriter *HarWriter, outboundLinkWriter *OutboundLinkWr
	cleaner := Cleaner{
		assembler:         assembler,
		assemblerMutex:    &assemblerMutex,
		matcher:           &reqResMatcher,
		cleanPeriod:       cleanPeriod,
		connectionTimeout: staleConnectionTimeout,
	}
@@ -377,7 +375,7 @@ func startPassiveTapper(harWriter *HarWriter, outboundLinkWriter *OutboundLinkWr
		errorsSummary := fmt.Sprintf("%v", errorsMap)
		errorsMapMutex.Unlock()
		log.Printf("%v (errors: %v, errTypes:%v) - Errors Summary: %s",
			time.Since(statsTracker.appStats.StartTime),
			time.Since(appStats.StartTime),
			nErrors,
			errorMapLen,
			errorsSummary,
@@ -387,10 +385,9 @@ func startPassiveTapper(harWriter *HarWriter, outboundLinkWriter *OutboundLinkWr
		memStats := runtime.MemStats{}
		runtime.ReadMemStats(&memStats)
		log.Printf(
			"mem: %d, goroutines: %d, unmatched messages: %d",
			"mem: %d, goroutines: %d",
			memStats.HeapAlloc,
			runtime.NumGoroutine(),
			reqResMatcher.openMessagesMap.Count(),
		)

		// Since the last print
@@ -401,7 +398,7 @@ func startPassiveTapper(harWriter *HarWriter, outboundLinkWriter *OutboundLinkWr
			cleanStats.closed,
			cleanStats.deleted,
		)
		currentAppStats := statsTracker.dumpStats()
		currentAppStats := appStats.DumpStats()
		appStatsJSON, _ := json.Marshal(currentAppStats)
		log.Printf("app stats - %v", string(appStatsJSON))
	}
@@ -411,11 +408,18 @@ func startPassiveTapper(harWriter *HarWriter, outboundLinkWriter *OutboundLinkWr
		startMemoryProfiler()
	}

	for packet := range source.Packets() {
		packetsCount := statsTracker.incPacketsCount()
	for {
		packet, err := source.NextPacket()
		if err == io.EOF {
			break
		} else if err != nil {
			rlog.Debugf("Error: %v", err)
			continue
		}
		packetsCount := appStats.IncPacketsCount()
		rlog.Debugf("PACKET #%d", packetsCount)
		data := packet.Data()
		statsTracker.updateProcessedBytes(int64(len(data)))
		appStats.UpdateProcessedBytes(uint64(len(data)))
		if *hexdumppkt {
			rlog.Debugf("Packet content (%d/0x%x) - %s", len(data), len(data), hex.Dump(data))
		}
@@ -449,7 +453,7 @@ func startPassiveTapper(harWriter *HarWriter, outboundLinkWriter *OutboundLinkWr

		tcp := packet.Layer(layers.LayerTypeTCP)
		if tcp != nil {
			statsTracker.incTcpPacketsCount()
			appStats.IncTcpPacketsCount()
			tcp := tcp.(*layers.TCP)
			if *checksum {
				err := tcp.SetNetworkLayerForChecksum(packet.NetworkLayer())
@@ -467,15 +471,15 @@ func startPassiveTapper(harWriter *HarWriter, outboundLinkWriter *OutboundLinkWr
			assemblerMutex.Unlock()
		}

		done := *maxcount > 0 && statsTracker.appStats.PacketsCount >= *maxcount
		done := *maxcount > 0 && int64(appStats.PacketsCount) >= *maxcount
		if done {
			errorsMapMutex.Lock()
			errorMapLen := len(errorsMap)
			errorsMapMutex.Unlock()
			log.Printf("Processed %v packets (%v bytes) in %v (errors: %v, errTypes:%v)",
				statsTracker.appStats.PacketsCount,
				statsTracker.appStats.ProcessedBytes,
				time.Since(statsTracker.appStats.StartTime),
				appStats.PacketsCount,
				appStats.ProcessedBytes,
				time.Since(appStats.StartTime),
				nErrors,
				errorMapLen)
		}

@@ -3,36 +3,31 @@ package tap
import (
	"os"
	"strconv"
	"time"
)

const (
	MemoryProfilingEnabledEnvVarName          = "MEMORY_PROFILING_ENABLED"
	MemoryProfilingDumpPath                   = "MEMORY_PROFILING_DUMP_PATH"
	MemoryProfilingTimeIntervalSeconds        = "MEMORY_PROFILING_TIME_INTERVAL"
	MaxBufferedPagesTotalEnvVarName           = "MAX_BUFFERED_PAGES_TOTAL"
	MaxBufferedPagesPerConnectionEnvVarName   = "MAX_BUFFERED_PAGES_PER_CONNECTION"
	TcpStreamChannelTimeoutMsEnvVarName       = "TCP_STREAM_CHANNEL_TIMEOUT_MS"
	MaxNumberOfGoroutinesEnvVarName           = "MAX_NUMBER_OF_GOROUTINES"
	MaxBufferedPagesTotalDefaultValue         = 5000
	MaxBufferedPagesPerConnectionDefaultValue = 5000
	TcpStreamChannelTimeoutMsDefaultValue     = 5000
	MaxNumberOfGoroutinesDefaultValue         = 4000
)

type globalSettings struct {
	filterPorts       []int
	filterAuthorities []string
}

var gSettings = &globalSettings{
	filterPorts:       []int{},
	filterAuthorities: []string{},
}

func SetFilterPorts(ports []int) {
	gSettings.filterPorts = ports
}

func GetFilterPorts() []int {
	ports := make([]int, len(gSettings.filterPorts))
	copy(ports, gSettings.filterPorts)
	return ports
}

func SetFilterAuthorities(ipAddresses []string) {
	gSettings.filterAuthorities = ipAddresses
}
@@ -59,6 +54,22 @@ func GetMaxBufferedPagesPerConnection() int {
	return valueFromEnv
}

func GetTcpChannelTimeoutMs() time.Duration {
	valueFromEnv, err := strconv.Atoi(os.Getenv(TcpStreamChannelTimeoutMsEnvVarName))
	if err != nil {
		return TcpStreamChannelTimeoutMsDefaultValue * time.Millisecond
	}
	return time.Duration(valueFromEnv) * time.Millisecond
}

func GetMaxNumberOfGoroutines() int {
	valueFromEnv, err := strconv.Atoi(os.Getenv(MaxNumberOfGoroutinesEnvVarName))
	if err != nil {
		return MaxNumberOfGoroutinesDefaultValue
	}
	return valueFromEnv
}

func GetMemoryProfilingEnabled() bool {
	return os.Getenv(MemoryProfilingEnabledEnvVarName) == "1"
}

@@ -1,104 +0,0 @@
package tap

import (
	"sync"
	"time"
)

type AppStats struct {
	StartTime                   time.Time `json:"-"`
	ProcessedBytes              int64     `json:"processedBytes"`
	PacketsCount                int64     `json:"packetsCount"`
	TcpPacketsCount             int64     `json:"tcpPacketsCount"`
	ReassembledTcpPayloadsCount int64     `json:"reassembledTcpPayloadsCount"`
	TlsConnectionsCount         int64     `json:"tlsConnectionsCount"`
	MatchedPairs                int64     `json:"matchedPairs"`
}

type StatsTracker struct {
	appStats                         AppStats
	processedBytesMutex              sync.Mutex
	packetsCountMutex                sync.Mutex
	tcpPacketsCountMutex             sync.Mutex
	reassembledTcpPayloadsCountMutex sync.Mutex
	tlsConnectionsCountMutex         sync.Mutex
	matchedPairsMutex                sync.Mutex
}

func (st *StatsTracker) incMatchedPairs() {
	st.matchedPairsMutex.Lock()
	st.appStats.MatchedPairs++
	st.matchedPairsMutex.Unlock()
}

func (st *StatsTracker) incPacketsCount() int64 {
	st.packetsCountMutex.Lock()
	st.appStats.PacketsCount++
	currentPacketsCount := st.appStats.PacketsCount
	st.packetsCountMutex.Unlock()
	return currentPacketsCount
}

func (st *StatsTracker) incTcpPacketsCount() {
	st.tcpPacketsCountMutex.Lock()
	st.appStats.TcpPacketsCount++
	st.tcpPacketsCountMutex.Unlock()
}

func (st *StatsTracker) incReassembledTcpPayloadsCount() {
	st.reassembledTcpPayloadsCountMutex.Lock()
	st.appStats.ReassembledTcpPayloadsCount++
	st.reassembledTcpPayloadsCountMutex.Unlock()
}

func (st *StatsTracker) incTlsConnectionsCount() {
	st.tlsConnectionsCountMutex.Lock()
	st.appStats.TlsConnectionsCount++
	st.tlsConnectionsCountMutex.Unlock()
}

func (st *StatsTracker) updateProcessedBytes(size int64) {
	st.processedBytesMutex.Lock()
	st.appStats.ProcessedBytes += size
	st.processedBytesMutex.Unlock()
}

func (st *StatsTracker) setStartTime(startTime time.Time) {
	st.appStats.StartTime = startTime
}

func (st *StatsTracker) dumpStats() *AppStats {
	currentAppStats := &AppStats{StartTime: st.appStats.StartTime}

	st.processedBytesMutex.Lock()
	currentAppStats.ProcessedBytes = st.appStats.ProcessedBytes
	st.appStats.ProcessedBytes = 0
	st.processedBytesMutex.Unlock()

	st.packetsCountMutex.Lock()
	currentAppStats.PacketsCount = st.appStats.PacketsCount
	st.appStats.PacketsCount = 0
	st.packetsCountMutex.Unlock()

	st.tcpPacketsCountMutex.Lock()
	currentAppStats.TcpPacketsCount = st.appStats.TcpPacketsCount
	st.appStats.TcpPacketsCount = 0
	st.tcpPacketsCountMutex.Unlock()

	st.reassembledTcpPayloadsCountMutex.Lock()
	currentAppStats.ReassembledTcpPayloadsCount = st.appStats.ReassembledTcpPayloadsCount
	st.appStats.ReassembledTcpPayloadsCount = 0
	st.reassembledTcpPayloadsCountMutex.Unlock()

	st.tlsConnectionsCountMutex.Lock()
	currentAppStats.TlsConnectionsCount = st.appStats.TlsConnectionsCount
	st.appStats.TlsConnectionsCount = 0
	st.tlsConnectionsCountMutex.Unlock()

	st.matchedPairsMutex.Lock()
	currentAppStats.MatchedPairs = st.appStats.MatchedPairs
	st.appStats.MatchedPairs = 0
	st.matchedPairsMutex.Unlock()

	return currentAppStats
}
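The removed `dumpStats` above snapshots each counter and zeroes it under its mutex, so successive dumps report per-interval deltas rather than running totals. A reduced, runnable sketch of that snapshot-and-reset idiom (the `counter` type is illustrative, not code from the diff):

```go
package main

import (
	"fmt"
	"sync"
)

// counter condenses the StatsTracker pattern to a single field: every
// increment and every read-and-reset happens under the same mutex.
type counter struct {
	mu sync.Mutex
	n  int64
}

func (c *counter) Inc() {
	c.mu.Lock()
	c.n++
	c.mu.Unlock()
}

// Dump returns the current value and resets it, atomically with respect to Inc,
// so two concurrent dumps never double-count an increment.
func (c *counter) Dump() int64 {
	c.mu.Lock()
	defer c.mu.Unlock()
	v := c.n
	c.n = 0
	return v
}

func main() {
	var c counter
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() { defer wg.Done(); c.Inc() }()
	}
	wg.Wait()
	fmt.Println(c.Dump()) // prints 100
	fmt.Println(c.Dump()) // prints 0: the first dump reset the counter
}
```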
114	tap/tcp_reader.go	Normal file

@@ -0,0 +1,114 @@
package tap

import (
	"bufio"
	"fmt"
	"io"
	"io/ioutil"
	"sync"
	"time"

	"github.com/bradleyfalzon/tlsx"
	"github.com/romana/rlog"
	"github.com/up9inc/mizu/tap/api"
)

const checkTLSPacketAmount = 100

type tcpReaderDataMsg struct {
	bytes     []byte
	timestamp time.Time
}

type tcpID struct {
	srcIP   string
	dstIP   string
	srcPort string
	dstPort string
}

type ConnectionInfo struct {
	ClientIP   string
	ClientPort string
	ServerIP   string
	ServerPort string
	IsOutgoing bool
}

func (tid *tcpID) String() string {
	return fmt.Sprintf("%s->%s %s->%s", tid.srcIP, tid.dstIP, tid.srcPort, tid.dstPort)
}

/* tcpReader reads from a channel of TCP payload bytes and parses them into requests and responses.
 * The payload is written to the channel by a tcpStream object that is dedicated to one TCP connection.
 * A tcpReader object is unidirectional: it parses either a client stream or a server stream.
 * Implements the io.Reader interface (Read)
 */
type tcpReader struct {
	ident              string
	tcpID              *api.TcpID
	isClosed           bool
	isClient           bool
	isOutgoing         bool
	msgQueue           chan tcpReaderDataMsg // Channel of captured reassembled tcp payload
	data               []byte
	superTimer         *api.SuperTimer
	parent             *tcpStream
	messageCount       uint
	packetsSeen        uint
	outboundLinkWriter *OutboundLinkWriter
	extension          *api.Extension
	emitter            api.Emitter
	counterPair        *api.CounterPair
	sync.Mutex
}

func (h *tcpReader) Read(p []byte) (int, error) {
	var msg tcpReaderDataMsg

	ok := true
	for ok && len(h.data) == 0 {
		msg, ok = <-h.msgQueue
		h.data = msg.bytes

		h.superTimer.CaptureTime = msg.timestamp
		if len(h.data) > 0 {
			h.packetsSeen += 1
		}
		if h.packetsSeen < checkTLSPacketAmount && len(msg.bytes) > 5 { // packets with less than 5 bytes cause tlsx to panic
			clientHello := tlsx.ClientHello{}
			err := clientHello.Unmarshall(msg.bytes)
			if err == nil {
				rlog.Debugf("Detected TLS client hello with SNI %s\n", clientHello.SNI)
				// TODO: Throws `panic: runtime error: invalid memory address or nil pointer dereference` error.
				// numericPort, _ := strconv.Atoi(h.tcpID.DstPort)
				// h.outboundLinkWriter.WriteOutboundLink(h.tcpID.SrcIP, h.tcpID.DstIP, numericPort, clientHello.SNI, TLSProtocol)
			}
		}
	}
	if !ok || len(h.data) == 0 {
		return 0, io.EOF
	}

	l := copy(p, h.data)
	h.data = h.data[l:]
	return l, nil
}

func (h *tcpReader) Close() {
	h.Lock()
	if !h.isClosed {
		h.isClosed = true
		close(h.msgQueue)
	}
	h.Unlock()
}

func (h *tcpReader) run(wg *sync.WaitGroup) {
	defer wg.Done()
	b := bufio.NewReader(h)
	err := h.extension.Dissector.Dissect(b, h.isClient, h.tcpID, h.counterPair, h.superTimer, h.parent.superIdentifier, h.emitter)
	if err != nil {
		io.Copy(ioutil.Discard, b)
	}
}
@@ -2,32 +2,34 @@ package tap

import (
	"encoding/binary"
	"encoding/hex"
	"fmt"
	"sync"

	"github.com/google/gopacket"
	"github.com/google/gopacket/layers" // pulls in all layers decoders
	"github.com/google/gopacket/reassembly"
	"github.com/up9inc/mizu/tap/api"
)

/* It's a connection (bidirectional)
 * Implements gopacket.reassembly.Stream interface (Accept, ReassembledSG, ReassemblyComplete)
 * ReassembledSG gets called when new reassembled data is ready (i.e. bytes in order, no duplicates, complete)
 * In our implementation, we pass information from ReassembledSG to the httpReader through a shared channel.
 * In our implementation, we pass information from ReassembledSG to the tcpReader through a shared channel.
 */
type tcpStream struct {
	tcpstate       *reassembly.TCPSimpleFSM
	fsmerr         bool
	optchecker     reassembly.TCPOptionCheck
	net, transport gopacket.Flow
	isDNS          bool
	isHTTP         bool
	reversed       bool
	client         httpReader
	server         httpReader
	urls           []string
	ident          string
	id              int64
	isClosed        bool
	superIdentifier *api.SuperIdentifier
	tcpstate        *reassembly.TCPSimpleFSM
	fsmerr          bool
	optchecker      reassembly.TCPOptionCheck
	net, transport  gopacket.Flow
	isDNS           bool
	isTapTarget     bool
	clients         []tcpReader
	servers         []tcpReader
	urls            []string
	ident           string
	sync.Mutex
}

@@ -141,18 +143,30 @@ func (t *tcpStream) ReassembledSG(sg reassembly.ScatterGather, ac reassembly.Ass
		if len(data) > 2+int(dnsSize) {
			sg.KeepFrom(2 + int(dnsSize))
		}
	} else if t.isHTTP {
	} else if t.isTapTarget {
		if length > 0 {
			if *hexdump {
				Trace("Feeding http with:%s", hex.Dump(data))
			}
			// This is where we pass the reassembled information onwards
			// This channel is read by an httpReader object
			statsTracker.incReassembledTcpPayloadsCount()
			if dir == reassembly.TCPDirClientToServer && !t.reversed {
				t.client.msgQueue <- httpReaderDataMsg{data, ac.GetCaptureInfo().Timestamp}
			// This channel is read by an tcpReader object
			appStats.IncReassembledTcpPayloadsCount()
			timestamp := ac.GetCaptureInfo().Timestamp
			if dir == reassembly.TCPDirClientToServer {
				for i := range t.clients {
					reader := &t.clients[i]
					reader.Lock()
					if !reader.isClosed {
						reader.msgQueue <- tcpReaderDataMsg{data, timestamp}
					}
					reader.Unlock()
				}
			} else {
				t.server.msgQueue <- httpReaderDataMsg{data, ac.GetCaptureInfo().Timestamp}
				for i := range t.servers {
					reader := &t.servers[i]
					reader.Lock()
					if !reader.isClosed {
						reader.msgQueue <- tcpReaderDataMsg{data, timestamp}
					}
					reader.Unlock()
				}
			}
		}
	}
@@ -160,10 +174,33 @@ func (t *tcpStream) ReassembledSG(sg reassembly.ScatterGather, ac reassembly.Ass

func (t *tcpStream) ReassemblyComplete(ac reassembly.AssemblerContext) bool {
	Trace("%s: Connection closed", t.ident)
	if t.isHTTP {
		close(t.client.msgQueue)
		close(t.server.msgQueue)
	if t.isTapTarget && !t.isClosed {
		t.Close()
	}
	// do not remove the connection to allow last ACK
	return false
}

func (t *tcpStream) Close() {
	shouldReturn := false
	t.Lock()
	if t.isClosed {
		shouldReturn = true
	} else {
		t.isClosed = true
	}
	t.Unlock()
	if shouldReturn {
		return
	}
	streams.Delete(t.id)

	for i := range t.clients {
		reader := &t.clients[i]
		reader.Close()
	}
	for i := range t.servers {
		reader := &t.servers[i]
		reader.Close()
	}
}

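Both `tcpStream.Close` above and `tcpReader.Close` in the new file guard the channel close behind a mutex and an `isClosed` flag, so concurrent or repeated calls never close the channel twice — a second `close` on a Go channel panics. A minimal sketch of that idempotent-close guard (the `closer` type is illustrative, not from the diff):

```go
package main

import (
	"fmt"
	"sync"
)

// closer owns a channel and guarantees it is closed at most once,
// no matter how many goroutines call Close.
type closer struct {
	mu       sync.Mutex
	isClosed bool
	msgQueue chan []byte
}

func (c *closer) Close() {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.isClosed {
		return // already closed by another caller; closing again would panic
	}
	c.isClosed = true
	close(c.msgQueue)
}

func main() {
	c := &closer{msgQueue: make(chan []byte)}
	c.Close()
	c.Close() // safe: the flag short-circuits the second close
	fmt.Println(c.isClosed) // prints true
}
```

The same flag is also what lets `ReassembledSG` check `!reader.isClosed` under the lock before sending, avoiding a send on a closed channel.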
@@ -2,8 +2,12 @@ package tap

import (
	"fmt"
	"github.com/romana/rlog"
	"runtime"
	"sync"
	"time"

	"github.com/romana/rlog"
	"github.com/up9inc/mizu/tap/api"

	"github.com/google/gopacket"
	"github.com/google/gopacket/layers" // pulls in all layers decoders
@@ -17,72 +21,105 @@ import (
 */
type tcpStreamFactory struct {
	wg                 sync.WaitGroup
	doHTTP             bool
	harWriter          *HarWriter
	outbountLinkWriter *OutboundLinkWriter
	outboundLinkWriter *OutboundLinkWriter
	Emitter            api.Emitter
}

type tcpStreamWrapper struct {
	stream    *tcpStream
	createdAt time.Time
}

var streams *sync.Map = &sync.Map{} // global
var streamId int64 = 0

var maxNumberOfGoroutines int

func (factory *tcpStreamFactory) New(net, transport gopacket.Flow, tcp *layers.TCP, ac reassembly.AssemblerContext) reassembly.Stream {
	rlog.Debugf("* NEW: %s %s", net, transport)
	fsmOptions := reassembly.TCPSimpleFSMOptions{
		SupportMissingEstablishment: *allowmissinginit,
	}
	rlog.Debugf("Current App Ports: %v", gSettings.filterPorts)
	srcIp := net.Src().String()
	dstIp := net.Dst().String()
	dstPort := int(tcp.DstPort)
	srcPort := transport.Src().String()
	dstPort := transport.Dst().String()

	if factory.shouldNotifyOnOutboundLink(dstIp, dstPort) {
		factory.outbountLinkWriter.WriteOutboundLink(net.Src().String(), dstIp, dstPort, "", "")
	}
	props := factory.getStreamProps(srcIp, dstIp, dstPort)
	isHTTP := props.isTapTarget
	// if factory.shouldNotifyOnOutboundLink(dstIp, dstPort) {
	// 	factory.outboundLinkWriter.WriteOutboundLink(net.Src().String(), dstIp, dstPort, "", "")
	// }
	props := factory.getStreamProps(srcIp, srcPort, dstIp, dstPort)
	isTapTarget := props.isTapTarget
	stream := &tcpStream{
		net:        net,
		transport:  transport,
		isDNS:      tcp.SrcPort == 53 || tcp.DstPort == 53,
		isHTTP:     isHTTP && factory.doHTTP,
		reversed:   tcp.SrcPort == 80,
		tcpstate:   reassembly.NewTCPSimpleFSM(fsmOptions),
		ident:      fmt.Sprintf("%s:%s", net, transport),
		optchecker: reassembly.NewTCPOptionCheck(),
		net:             net,
		transport:       transport,
		isDNS:           tcp.SrcPort == 53 || tcp.DstPort == 53,
		isTapTarget:     isTapTarget,
		tcpstate:        reassembly.NewTCPSimpleFSM(fsmOptions),
		ident:           fmt.Sprintf("%s:%s", net, transport),
		optchecker:      reassembly.NewTCPOptionCheck(),
		superIdentifier: &api.SuperIdentifier{},
	}
	if stream.isHTTP {
		stream.client = httpReader{
			msgQueue: make(chan httpReaderDataMsg),
			ident:    fmt.Sprintf("%s %s", net, transport),
			tcpID: tcpID{
				srcIP:   net.Src().String(),
				dstIP:   net.Dst().String(),
				srcPort: transport.Src().String(),
				dstPort: transport.Dst().String(),
			},
			hexdump:            *hexdump,
			parent:             stream,
			isClient:           true,
			isOutgoing:         props.isOutgoing,
			harWriter:          factory.harWriter,
			outboundLinkWriter: factory.outbountLinkWriter,
	if stream.isTapTarget {
		if runtime.NumGoroutine() > maxNumberOfGoroutines {
			appStats.IncDroppedTcpStreams()
			rlog.Debugf("Dropped a TCP stream because of load. Total dropped: %d Total Goroutines: %d\n", appStats.DroppedTcpStreams, runtime.NumGoroutine())
			return stream
		}
		stream.server = httpReader{
			msgQueue: make(chan httpReaderDataMsg),
			ident:    fmt.Sprintf("%s %s", net.Reverse(), transport.Reverse()),
			tcpID: tcpID{
				srcIP:   net.Dst().String(),
				dstIP:   net.Src().String(),
				srcPort: transport.Dst().String(),
				dstPort: transport.Src().String(),
			},
			hexdump:            *hexdump,
			parent:             stream,
			isOutgoing:         props.isOutgoing,
			harWriter:          factory.harWriter,
			outboundLinkWriter: factory.outbountLinkWriter,
		streamId++
		stream.id = streamId
		for i, extension := range extensions {
			counterPair := &api.CounterPair{
				Request:  0,
				Response: 0,
			}
			stream.clients = append(stream.clients, tcpReader{
				msgQueue:   make(chan tcpReaderDataMsg),
				superTimer: &api.SuperTimer{},
				ident:      fmt.Sprintf("%s %s", net, transport),
				tcpID: &api.TcpID{
					SrcIP:   srcIp,
					DstIP:   dstIp,
					SrcPort: srcPort,
					DstPort: dstPort,
				},
				parent:             stream,
				isClient:           true,
				isOutgoing:         props.isOutgoing,
				outboundLinkWriter: factory.outboundLinkWriter,
				extension:          extension,
				emitter:            factory.Emitter,
				counterPair:        counterPair,
			})
			stream.servers = append(stream.servers, tcpReader{
				msgQueue:   make(chan tcpReaderDataMsg),
				superTimer: &api.SuperTimer{},
				ident:      fmt.Sprintf("%s %s", net, transport),
				tcpID: &api.TcpID{
					SrcIP:   net.Dst().String(),
					DstIP:   net.Src().String(),
					SrcPort: transport.Dst().String(),
					DstPort: transport.Src().String(),
				},
				parent:             stream,
				isClient:           false,
				isOutgoing:         props.isOutgoing,
				outboundLinkWriter: factory.outboundLinkWriter,
				extension:          extension,
				emitter:            factory.Emitter,
				counterPair:        counterPair,
			})

			streams.Store(stream.id, &tcpStreamWrapper{
				stream:    stream,
				createdAt: time.Now(),
			})

			factory.wg.Add(2)
			// Start reading from channel stream.reader.bytes
			go stream.clients[i].run(&factory.wg)
			go stream.servers[i].run(&factory.wg)
		}
		factory.wg.Add(2)
		// Start reading from channels stream.client.bytes and stream.server.bytes
		go stream.client.run(&factory.wg)
		go stream.server.run(&factory.wg)
	}
	return stream
}
@@ -91,34 +128,24 @@ func (factory *tcpStreamFactory) WaitGoRoutines() {
	factory.wg.Wait()
}

func (factory *tcpStreamFactory) getStreamProps(srcIP string, dstIP string, dstPort int) *streamProps {
func (factory *tcpStreamFactory) getStreamProps(srcIP string, srcPort string, dstIP string, dstPort string) *streamProps {
	if hostMode {
		if inArrayString(gSettings.filterAuthorities, fmt.Sprintf("%s:%d", dstIP, dstPort)) == true {
			rlog.Debugf("getStreamProps %s", fmt.Sprintf("+ host1 %s:%d", dstIP, dstPort))
		if inArrayString(gSettings.filterAuthorities, fmt.Sprintf("%s:%s", dstIP, dstPort)) {
			rlog.Debugf("getStreamProps %s", fmt.Sprintf("+ host1 %s:%s", dstIP, dstPort))
			return &streamProps{isTapTarget: true, isOutgoing: false}
		} else if inArrayString(gSettings.filterAuthorities, dstIP) == true {
		} else if inArrayString(gSettings.filterAuthorities, dstIP) {
			rlog.Debugf("getStreamProps %s", fmt.Sprintf("+ host2 %s", dstIP))
			return &streamProps{isTapTarget: true, isOutgoing: false}
		} else if *anydirection && inArrayString(gSettings.filterAuthorities, srcIP) == true {
			rlog.Debugf("getStreamProps %s", fmt.Sprintf("+ host3 %s", srcIP))
		} else if inArrayString(gSettings.filterAuthorities, fmt.Sprintf("%s:%s", srcIP, srcPort)) {
			rlog.Debugf("getStreamProps %s", fmt.Sprintf("+ host3 %s:%s", srcIP, srcPort))
			return &streamProps{isTapTarget: true, isOutgoing: true}
		} else if inArrayString(gSettings.filterAuthorities, srcIP) {
			rlog.Debugf("getStreamProps %s", fmt.Sprintf("+ host4 %s", srcIP))
			return &streamProps{isTapTarget: true, isOutgoing: true}
		}
		return &streamProps{isTapTarget: false}
		return &streamProps{isTapTarget: false, isOutgoing: false}
	} else {
		isTappedPort := dstPort == 80 || (gSettings.filterPorts != nil && (inArrayInt(gSettings.filterPorts, dstPort)))
		if !isTappedPort {
			rlog.Debugf("getStreamProps %s", fmt.Sprintf("- notHost1 %d", dstPort))
			return &streamProps{isTapTarget: false, isOutgoing: false}
		}

		isOutgoing := !inArrayString(ownIps, dstIP)

		if !*anydirection && isOutgoing {
			rlog.Debugf("getStreamProps %s", fmt.Sprintf("- notHost2"))
			return &streamProps{isTapTarget: false, isOutgoing: isOutgoing}
		}

		rlog.Debugf("getStreamProps %s", fmt.Sprintf("+ notHost3 %s -> %s:%d", srcIP, dstIP, dstPort))
		rlog.Debugf("getStreamProps %s", fmt.Sprintf("+ notHost3 %s:%s -> %s:%s", srcIP, srcPort, dstIP, dstPort))
		return &streamProps{isTapTarget: true}
	}
}

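In host mode, `getStreamProps` above matches the destination (and, after this change, also the source) against `gSettings.filterAuthorities`, accepting either an exact `ip:port` authority or a bare IP that covers any port. A hypothetical condensation of that matching rule — `matchesAuthority` is not a function in the diff, just an illustration of the lookup:

```go
package main

import "fmt"

// matchesAuthority reports whether the filter list contains either the exact
// "ip:port" authority or the bare IP (which matches any port).
func matchesAuthority(filter []string, ip string, port string) bool {
	for _, a := range filter {
		if a == ip+":"+port || a == ip {
			return true
		}
	}
	return false
}

func main() {
	filter := []string{"10.0.0.5", "10.0.0.7:8080"}
	fmt.Println(matchesAuthority(filter, "10.0.0.5", "9000")) // true: bare IP matches any port
	fmt.Println(matchesAuthority(filter, "10.0.0.7", "8080")) // true: exact authority
	fmt.Println(matchesAuthority(filter, "10.0.0.7", "9000")) // false
}
```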
2	ui/.env.example	Normal file

@@ -0,0 +1,2 @@
REACT_APP_OVERRIDE_WS_URL="ws://localhost:8899/ws"
REACT_APP_OVERRIDE_API_URL="http://localhost:8899/api/"
22792	ui/package-lock.json	generated

File diff suppressed because it is too large. Some files were not shown because too many files have changed in this diff.