Compare commits

...

42 Commits

Author SHA1 Message Date
gadotroee
378270ee3d Change tests to use github repo (#1211)
remove google related stuff
2022-07-26 15:23:37 +03:00
Nimrod Gilboa Markevich
692c500b0f Improve Go TLS address availability (#1207)
Fetch source and destination addresses with BPF from TCP kprobes, similar to how it is done for the openssl lib.
The chunk contains both the source and the destination address.
The FD is no longer used to obtain addresses.
2022-07-19 14:31:27 +03:00
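
A minimal sketch of the idea behind #1207, with hypothetical type and field names (the real structs live under tap/tlstapper): each captured chunk now carries the addresses resolved by the tcp kprobes, so the consumer no longer derives them from the socket FD.

package main

import (
	"fmt"
	"net"
)

// Hypothetical shapes for illustration only; the real structs live under tap/tlstapper.
type addressInfo struct {
	SrcIP   net.IP
	SrcPort uint16
	DstIP   net.IP
	DstPort uint16
}

type tlsChunk struct {
	Address addressInfo // filled in by the tcp_sendmsg/tcp_recvmsg kprobes
	Data    []byte
}

func main() {
	c := tlsChunk{Address: addressInfo{
		SrcIP: net.IPv4(10, 0, 0, 1), SrcPort: 43210,
		DstIP: net.IPv4(10, 0, 0, 2), DstPort: 443,
	}}
	// Addresses are read straight off the chunk instead of being resolved from the FD.
	fmt.Printf("%s:%d -> %s:%d\n", c.Address.SrcIP, c.Address.SrcPort, c.Address.DstIP, c.Address.DstPort)
}
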
gadotroee
5525214d0a Fix memory of acceptanceTests minikube cluster (#1209) 2022-07-19 12:55:44 +03:00
RoyIsland
efd414a2ed Removed telemetry (#1208) 2022-07-19 12:29:48 +03:00
AmitUp9
b3e79ff244 Grooming Traffic Stats Modal - change font and time picker position (#1206)
* font change and time picker position update

* add font-family to variables scss
2022-07-17 17:19:44 +03:00
AmitUp9
d4b9fea5a7 fix elastic time picker ui css (#1204) 2022-07-14 11:11:05 +03:00
leon-up9
d11770681b full height (#1202)
Co-authored-by: Leon <>
2022-07-13 18:37:22 +03:00
gadotroee
e9719cba3a Add time range to stats (#1199) 2022-07-13 17:21:18 +03:00
leon-up9
15f7b889e2 height change (#1201)
Co-authored-by: Leon <>
2022-07-13 13:37:08 +03:00
RoyUP9
d98ac0e8f7 Removed redundant IgnoredUserAgents field (#1198) 2022-07-12 20:41:42 +03:00
gadotroee
a3c236ff0a Fix colors map initialization (#1200) 2022-07-12 20:05:21 +03:00
gadotroee
4b280ecd6d Hide Response tab if there is no response (#1197) 2022-07-12 18:38:39 +03:00
leon-up9
de554f5fb6 ui/ include scss files in common (#1195)
* include scss files

* exported color

Co-authored-by: Leon <>
2022-07-12 11:50:24 +03:00
RoyUP9
7c159fffc0 Added redact using insertion filter (#1196) 2022-07-12 10:19:24 +03:00
M. Mert Yıldıran
1f2f63d11b Implement AMQP request-response matcher (#1091)
* Implement the basis of AMQP request-response matching

* Fix `package.json`

* Add `ExchangeDeclareOk`

* Add `ConnectionCloseOk`

* Add `BasicConsumeOk`

* Add `QueueBindOk`

* Add `representEmptyResponse` and fix `BasicPublish` and `BasicDeliver`

* Fix ident and matcher, add `connectionOpen`, `channelOpen`, `connectionTune`, `basicCancel`

* Fix linter

* Fix the unit tests

* #run_acceptance_tests

* #run_acceptance_tests

* Fix the tests #run_acceptance_tests

* Log, don't panic

* Don't skip AMQP acceptance tests #run_acceptance_tests

* Revert "Don't skip AMQP acceptance tests #run_acceptance_tests"

This reverts commit c60e9cf747.

* Remove `Details` section from `representEmpty`

* Add `This request or response has no data.` text
2022-07-11 17:33:25 +03:00
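
Not the actual implementation — a rough, generic sketch of the request-response matching pattern that #1091 applies to AMQP (each method frame paired with its *Ok counterpart); the identifiers and key format here are illustrative only.

package main

import (
	"fmt"
	"sync"
)

// matcher pairs a request with the response that later arrives under the same ident.
type matcher struct {
	mu      sync.Mutex
	pending map[string]string // ident -> request name
}

func newMatcher() *matcher { return &matcher{pending: map[string]string{}} }

func (m *matcher) registerRequest(ident, req string) {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.pending[ident] = req
}

func (m *matcher) registerResponse(ident, resp string) (string, bool) {
	m.mu.Lock()
	defer m.mu.Unlock()
	req, ok := m.pending[ident]
	if !ok {
		return "", false // no matching request seen for this ident
	}
	delete(m.pending, ident)
	return req + " <-> " + resp, true
}

func main() {
	m := newMatcher()
	m.registerRequest("conn1/ch1/QueueBind", "QueueBind")
	if pair, ok := m.registerResponse("conn1/ch1/QueueBind", "QueueBindOk"); ok {
		fmt.Println(pair)
	}
}
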
RoyUP9
e2544aea12 Remove duplication of Headers, Cookies and QueryString (#1192) 2022-07-11 13:16:22 +03:00
Nimrod Gilboa Markevich
57e60073f5 Generate bpf files before running tests (#1194) 2022-07-11 12:31:45 +03:00
Nimrod Gilboa Markevich
f220ad2f1a Delete ebpf object files (#1190)
Do not track object files in git.
Generate the files with `make bpf` or during `make agent`.
2022-07-11 12:08:20 +03:00
RoyUP9
5875ba0eb3 Fixed panic in socket cleanup (#1193) 2022-07-11 11:18:58 +03:00
leon-up9
9aaf3f1423 Ui/Download request replay (#1188)
* added icon

* download & upload

* button changes

* clean up

* changes

* pkg json

* img

* removed codeEditor options

* changes

Co-authored-by: Leon <>
2022-07-10 16:48:18 +03:00
Nimrod Gilboa Markevich
a2463b739a Improve tls info for openssl with kprobes (#1177)
Instead of going through the socket fd, addresses are obtained in kprobe/tcp_sendmsg on ssl write and kprobe/tcp_recvmsg on ssl read. The tcp kprobes and the openssl uprobes communicate through the id->sslInfo bpf map.
2022-07-07 19:11:54 +03:00
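
The real handoff in #1177 happens in eBPF C through the id->sslInfo map; purely as an illustration of the pattern, sketched here in Go: the openssl probe registers an entry keyed by an id, and the tcp kprobe fills in the addresses for that same key.

package main

import (
	"fmt"
	"sync"
)

// sslInfo loosely mirrors what the id->sslInfo BPF map carries.
type sslInfo struct {
	src string
	dst string
}

var idToSSLInfo sync.Map // key: an id shared by the uprobe and the kprobe (pid_tgid in the real map)

// onSSLWrite stands in for the uprobe on SSL_write: it registers the id.
func onSSLWrite(id uint64) {
	idToSSLInfo.LoadOrStore(id, &sslInfo{})
}

// onTCPSendmsg stands in for kprobe/tcp_sendmsg: it fills in the addresses
// for an id the uprobe registered earlier.
func onTCPSendmsg(id uint64, src, dst string) {
	if v, ok := idToSSLInfo.Load(id); ok {
		info := v.(*sslInfo)
		info.src, info.dst = src, dst
	}
}

func main() {
	const id = 4242
	onSSLWrite(id)
	onTCPSendmsg(id, "10.0.0.1:43210", "10.0.0.2:443")
	if v, ok := idToSSLInfo.Load(id); ok {
		fmt.Printf("%+v\n", *v.(*sslInfo))
	}
}
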
AmitUp9
c010d336bb add date to timeline ticks (#1191) 2022-07-07 14:09:41 +03:00
RoyUP9
710411e112 Replaced ProtocolId with Protocol Summary (#1189) 2022-07-07 12:05:59 +03:00
AmitUp9
274fbeb34a warning cleaning from console (#1187)
* warning cleaning from console

* code cleaning
2022-07-06 13:13:04 +03:00
leon-up9
38c05a6634 UI/feature flag for replay modal (#1186)
* context added

* import added

* changes

* ui enabled

* moved to Consts

* changes to recoil

* change

* new useEffect

Co-authored-by: Leon <>
2022-07-06 11:19:46 +03:00
leon-up9
d857935889 move icon to right side (#1185)
Co-authored-by: Leon <>
2022-07-05 16:35:03 +03:00
gadotroee
ec11b21b51 Add available protocols to the stats endpoint (colors for methods are coming from server) (#1184)
* add protocols array to the endpoint

* no message

* no message

* fix tests and small fix for the iteration

* fix the color of the protocol

* Get protocols list and method colors from server

* fix tests

* cr fixes

Co-authored-by: Amit Fainholts <amit@up9.com>
2022-07-05 15:30:39 +03:00
M. Mert Yıldıran
52c9251c00 Add ABI0 support to Go crypto/tls eBPF tracer (#1169)
* Determine the Go ABI and get `goid` offset from DWARF

* Add `ABI` enum and morph the function according to the detected ABI

* Pass `goid` offset to an eBPF map to retrieve it in eBPF context

* Add `vmlinux.h` and implement `get_goid_from_thread_local_storage`

* Fix BPF verifier errors

* Update the comments

* Add `go_abi_0.h` and implement `ABI0` specific reads for `arm64`

* Upgrade `github.com/cilium/ebpf` to `v0.9.0`

* Add a comment

* Add macros for x86 specific parts

* Update `x86.o`

* Fix the map key type

* Add `user_pt_regs`

* Update arm64 object file

* Fix the version detection logic

* Add `getGStructOffset` method

* Define `goid_offsets`, `goid_offsets_map` structs and pass the offsets correctly

* Fix the `net.TCPConn` and buffer addresses for `ABI0`

* Remove comment

* Fix the issues for arm64 build

* Update x86.o

* Revert "Fix the issues for arm64 build"

This reverts commit 48b041b1b6.

* Revert `user_pt_regs`

* Add `vmlinux` directory

* Fix the `build.sh` and `Dockerfile`

* Add vmlinux_arm64.h

* Disable `get_goid_from_thread_local_storage` on ARM64 with a macro

* Update x86.o

* Update arm64.o

* x86

* arm64

* Fix the cross-compilation issue from x86 to arm64

* Fix the same thing for x86

* Use `BPF_CORE_READ` macro instead of `bpf_ringbuf_reserve` to support kernel versions older than 5.8

Also:
Add legacy version of thread_struct: thread_struct___v46
Build an additional object file for the kernel versions older than or equal to 4.6 and load them accordingly.
Add github.com/moby/moby

* Make #define directives more definitive

* Select the x86 and arm64 versions of `vmlinux.h` using macros

* Put `goid` offsets into the map before installing `uprobe`(s)

* arm64

* #run_acceptance_tests

* Remove a forgotten `fmt.Printf`

* Log the detected Linux kernel version
2022-07-05 14:35:30 +03:00
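
One concrete piece of #1169 is loading a different eBPF object file for kernels older than or equal to 4.6 (the tlstapper46_* files that the Makefile diff further down builds). A hedged sketch of that selection logic only, assuming the kernel version itself is detected elsewhere (e.g. via uname) and logged as the commit describes:

package main

import (
	"fmt"
	"runtime"
)

type kernelVersion struct{ major, minor int }

// objectFileName chooses between the two object files per architecture:
// tlstapper46_bpfel_<arch>.o for kernels <= 4.6, tlstapper_bpfel_<arch>.o otherwise.
func objectFileName(kv kernelVersion) string {
	arch := "x86"
	if runtime.GOARCH == "arm64" {
		arch = "arm64"
	}
	if kv.major < 4 || (kv.major == 4 && kv.minor <= 6) {
		return fmt.Sprintf("tlstapper46_bpfel_%s.o", arch)
	}
	return fmt.Sprintf("tlstapper_bpfel_%s.o", arch)
}

func main() {
	kv := kernelVersion{major: 4, minor: 6} // would come from uname in practice
	fmt.Printf("detected Linux kernel %d.%d -> %s\n", kv.major, kv.minor, objectFileName(kv))
}
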
leon-up9
f3a6b3a9d4 with loading HOC (#1181)
* withLoading

* optional props

* LoadingWrapper

* pr comments

* changes

Co-authored-by: Leon <>
2022-07-05 12:23:47 +03:00
leon-up9
5f73c2d50a allow skipping formatting error (#1183)
Co-authored-by: Leon <>
2022-07-04 16:24:23 +03:00
gadotroee
d6944d467c Merge traffic stats endpoints to one and add auto interval logic (#1174) 2022-07-04 10:01:49 +03:00
leon-up9
57078517a4 Ui/replay demo notes (#1180)
* close ws on open

* check if json before parsing

* setting default tab response and missing dep

* remove redundant

* space

* PR fixes

* remove redundant

* changed order

* Revert "remove redundant"

This reverts commit 2f0bef5d33.

* revert order change

* changes

* change

* changes

Co-authored-by: Leon <>
2022-07-03 15:36:16 +03:00
AmitUp9
b4bc09637c fix select in traffic stats ui (#1179) 2022-07-03 11:48:38 +03:00
lirazyehezkel
302333b4ae TRA-4622 Remove rules feature UI (#1178)
* Removed policy rules (validation rules) feature

* updated test pcap

* Remove rules

* fix replay in rules

Co-authored-by: Roy Island <roy@up9.com>
Co-authored-by: RoyUP9 <87927115+RoyUP9@users.noreply.github.com>
Co-authored-by: Roee Gadot <roee.gadot@up9.com>
2022-07-03 11:32:23 +03:00
leon-up9
13ed8eb58a Ui/TRA-4607 - replay mizu requests (#1165)
* modal & keyValueTable added

* added pulse animation
KeyValueTable Behavior improved
CodeEditor added

* style changed

* codeEditor styling
support query params

* send request data

* finally stop loading

* select width

* methods and request format

* icon changed & moved near request tab

* accordions added and response presented

* 2-way params binding

* remove redundant

* host path fixed

* fix path input

* icon styles

* fallback for format body

* refresh button

* changes

* remove redundant

* closing tag

* capitalized character

* PR comments

* removed props

* small changes

* color added to response data

Co-authored-by: Leon <>
2022-07-03 09:48:56 +03:00
David Levanon
48619b3e1c fix small regression with openssl (#1176) 2022-06-30 17:39:03 +03:00
AmitUp9
3b0b311e1e TRA_4623 - methods view on timeline stats with coordination to the pie stats (#1175)
* Add a protocol select → when a protocol is selected, the view shows the commands of that exact protocol

* CR fixes

* added const instead of free string

* remove redundant sass file
2022-06-30 13:35:08 +03:00
gadotroee
3a9236a381 Fix replay to be like the test (#1173)
Represent should get an entry that has been marshaled and unmarshaled
2022-06-29 11:54:16 +03:00
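
A minimal sketch of the round trip described in #1173, with a hypothetical entry type and a represent stub standing in for the dissector API: replay should feed Represent an entry that has been through JSON marshal/unmarshal, so it sees the same generic types the tests do.

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// entry is a stand-in for the real entry type.
type entry struct {
	Request  map[string]interface{} `json:"request"`
	Response map[string]interface{} `json:"response"`
}

// represent is a stub for the dissector's Represent call.
func represent(e entry) string {
	return fmt.Sprintf("request has %d fields", len(e.Request))
}

func main() {
	original := entry{Request: map[string]interface{}{"url": "/post", "method": "POST"}}

	// Round-trip through JSON before representing, so the entry contains the
	// generic decoded types (map[string]interface{}, float64, ...) rather than
	// the in-memory ones.
	raw, err := json.Marshal(original)
	if err != nil {
		log.Fatal(err)
	}
	var roundTripped entry
	if err := json.Unmarshal(raw, &roundTripped); err != nil {
		log.Fatal(err)
	}

	fmt.Println(represent(roundTripped))
}
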
AmitUp9
2e7fd34210 Move general functions from timeline stats to helpers utils (#1170)
* no message

* format document
2022-06-29 11:15:22 +03:00
gadotroee
01af6aa19c Add replay endpoint for http (#1168) 2022-06-28 18:39:23 +03:00
David Levanon
2bfae1baae allow to configure max live streams from mizu cli (#1172)
* allow to configure max live streams from mizu cli

* Update cli/cmd/tap.go

Co-authored-by: Nimrod Gilboa Markevich <59927337+nimrod-up9@users.noreply.github.com>

Co-authored-by: Nimrod Gilboa Markevich <59927337+nimrod-up9@users.noreply.github.com>
2022-06-28 14:41:47 +03:00
RoyUP9
2df9fb49db Changed entry protocol field from struct to unique string (#1167) 2022-06-26 17:08:09 +03:00
186 changed files with 346152 additions and 12862 deletions

View File

@@ -52,15 +52,3 @@ jobs:
- name: Test
run: make acceptance-test
- name: Slack notification on failure
uses: ravsamhq/notify-slack-action@v1
if: always()
with:
status: ${{ job.status }}
notification_title: 'Mizu {workflow} has {status_message}'
message_format: '{emoji} *{workflow}* {status_message} during <{run_url}|run>, after commit <{commit_url}|{commit_sha}> by ${{ github.event.head_commit.author.name }} <${{ github.event.head_commit.author.email }}> ```${{ github.event.head_commit.message }}```'
footer: 'Linked Repo <{repo_url}|{repo}>'
notify_when: 'failure'
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}

View File

@@ -85,94 +85,6 @@ jobs:
GIT_BRANCH=${{ steps.version_parameters.outputs.branch }}
COMMIT_HASH=${{ github.sha }}
gcp-registry:
name: Push Docker image to GCR
runs-on: ubuntu-latest
strategy:
max-parallel: 2
fail-fast: false
matrix:
target:
- amd64
- arm64v8
steps:
- name: Check out the repo
uses: actions/checkout@v2
- id: 'auth'
uses: 'google-github-actions/auth@v0'
with:
credentials_json: '${{ secrets.GCR_JSON_KEY }}'
- name: 'Set up Cloud SDK'
uses: 'google-github-actions/setup-gcloud@v0'
- name: Determine versioning strategy
uses: haya14busa/action-cond@v1
id: condval
with:
cond: ${{ github.ref == 'refs/heads/main' }}
if_true: "stable"
if_false: "dev"
- name: Auto Increment Ver Action
uses: docker://igorgov/auto-inc-ver:v2.0.0
id: versioning
with:
mode: ${{ steps.condval.outputs.value }}
suffix: 'dev'
github_token: ${{ secrets.GITHUB_TOKEN }}
- name: Get version parameters
shell: bash
run: |
echo "##[set-output name=build_timestamp;]$(echo $(date +%s))"
echo "##[set-output name=branch;]$(echo ${GITHUB_REF#refs/heads/})"
id: version_parameters
- name: Get base image name
shell: bash
run: echo "##[set-output name=image;]$(echo gcr.io/up9-docker-hub/mizu/${GITHUB_REF#refs/heads/})"
id: base_image_step
- name: Docker meta
id: meta
uses: docker/metadata-action@v3
with:
images: |
${{ steps.base_image_step.outputs.image }}
tags: |
type=raw,${{ steps.versioning.outputs.version }}
type=raw,value=latest,enable=${{ steps.condval.outputs.value == 'stable' }}
type=raw,value=dev-latest,enable=${{ steps.condval.outputs.value == 'dev' }}
flavor: |
latest=auto
prefix=
suffix=-${{ matrix.target }},onlatest=true
- name: Login to GCR
uses: docker/login-action@v1
with:
registry: gcr.io
username: _json_key
password: ${{ secrets.GCR_JSON_KEY }}
- name: Build and push
uses: docker/build-push-action@v2
with:
context: .
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
build-args: |
TARGETARCH=${{ matrix.target }}
VER=${{ steps.versioning.outputs.version }}
BUILD_TIMESTAMP=${{ steps.version_parameters.outputs.build_timestamp }}
GIT_BRANCH=${{ steps.version_parameters.outputs.branch }}
COMMIT_HASH=${{ github.sha }}
docker-manifest:
name: Create and Push a Docker Manifest
runs-on: ubuntu-latest
@@ -232,7 +144,7 @@ jobs:
cli:
name: Build the CLI and publish
runs-on: ubuntu-latest
needs: [docker-manifest, gcp-registry]
needs: [docker-manifest]
steps:
- name: Set up Go 1.17
uses: actions/setup-go@v2
@@ -290,15 +202,3 @@ jobs:
tag: ${{ steps.versioning.outputs.version }}
prerelease: ${{ github.ref != 'refs/heads/main' }}
bodyFile: 'cli/bin/README.md'
- name: Slack notification on failure
uses: ravsamhq/notify-slack-action@v1
if: always()
with:
status: ${{ job.status }}
notification_title: 'Mizu enterprise {workflow} has {status_message}'
message_format: '{emoji} *{workflow}* {status_message} during <{run_url}|run>, after commit <{commit_url}|{commit_sha}> by ${{ github.event.head_commit.author.name }} <${{ github.event.head_commit.author.email }}> ```${{ github.event.head_commit.message }}```'
footer: 'Linked Repo <{repo_url}|{repo}>'
notify_when: 'failure'
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}

View File

@@ -32,6 +32,10 @@ jobs:
id: agent_modified_files
run: devops/check_modified_files.sh agent/
- name: Generate eBPF object files and go bindings
id: generate_ebpf
run: make bpf
- name: Go lint - agent
uses: golangci/golangci-lint-action@v2
if: steps.agent_modified_files.outputs.matched == 'true'

View File

@@ -40,6 +40,10 @@ jobs:
run: |
./devops/install-capstone.sh
- name: Generate eBPF object files and go bindings
id: generate_ebpf
run: make bpf
- name: Check CLI modified files
id: cli_modified_files
run: devops/check_modified_files.sh cli/

.gitignore
View File

@@ -56,3 +56,6 @@ tap/extensions/*/expect
# Ignore *.log files
*.log
# Object files
*.o

View File

@@ -8,7 +8,7 @@ SHELL=/bin/bash
# HELP
# This will output the help for each task
# thanks to https://marmelab.com/blog/2016/02/29/auto-documented-makefile.html
.PHONY: help ui agent agent-debug cli tap docker
.PHONY: help ui agent agent-debug cli tap docker bpf clean-bpf
help: ## This help.
@awk 'BEGIN {FS = ":.*?## "} /^[a-zA-Z_-]+:.*?## / {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}' $(MAKEFILE_LIST)
@@ -20,6 +20,13 @@ TS_SUFFIX="$(shell date '+%s')"
GIT_BRANCH="$(shell git branch | grep \* | cut -d ' ' -f2 | tr '[:upper:]' '[:lower:]' | tr '/' '_')"
BUCKET_PATH=static.up9.io/mizu/$(GIT_BRANCH)
export VER?=0.0
ARCH=$(shell uname -m)
ifeq ($(ARCH),$(filter $(ARCH),aarch64 arm64))
BPF_O_ARCH_LABEL=arm64
else
BPF_O_ARCH_LABEL=x86
endif
BPF_O_FILES = tap/tlstapper/tlstapper46_bpfel_$(BPF_O_ARCH_LABEL).o tap/tlstapper/tlstapper_bpfel_$(BPF_O_ARCH_LABEL).o
ui: ## Build UI.
@(cd ui; npm i ; npm run build; )
@@ -31,11 +38,17 @@ cli: ## Build CLI.
cli-debug: ## Build CLI.
@echo "building cli"; cd cli && $(MAKE) build-debug
agent: ## Build agent.
agent: bpf ## Build agent.
@(echo "building mizu agent .." )
@(cd agent; go build -o build/mizuagent main.go)
@ls -l agent/build
bpf: $(BPF_O_FILES)
$(BPF_O_FILES): $(wildcard tap/tlstapper/bpf/**/*.[ch])
@(echo "building tlstapper bpf")
@(./tap/tlstapper/bpf-builder/build.sh)
agent-debug: ## Build agent for debug.
@(echo "building mizu agent for debug.." )
@(cd agent; go build -gcflags="all=-N -l" -o build/mizuagent main.go)
@@ -76,6 +89,9 @@ clean-cli: ## Clean CLI.
clean-docker: ## Run clean docker
@(echo "DOCKER cleanup - NOT IMPLEMENTED YET " )
clean-bpf:
@(rm $(BPF_O_FILES) ; echo "bpf cleanup done" )
test-lint: ## Run lint on all modules
cd agent && golangci-lint run
cd shared && golangci-lint run

View File

@@ -11,7 +11,6 @@ module.exports = defineConfig({
testUrl: 'http://localhost:8899/',
redactHeaderContent: 'User-Header[REDACTED]',
redactBodyContent: '{ "User": "[REDACTED]" }',
regexMaskingBodyContent: '[REDACTED]',
greenFilterColor: 'rgb(210, 250, 210)',
redFilterColor: 'rgb(250, 214, 220)',
bodyJsonClass: '.hljs',

View File

@@ -1,7 +0,0 @@
import {isValueExistsInElement} from "../testHelpers/TrafficHelper";
it('Loading Mizu', function () {
cy.visit(Cypress.env('testUrl'));
});
isValueExistsInElement(true, Cypress.env('regexMaskingBodyContent'), Cypress.env('bodyJsonClass'));

View File

@@ -18,7 +18,6 @@ require (
github.com/fsnotify/fsnotify v1.5.1 // indirect
github.com/go-logr/logr v1.2.2 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang-jwt/jwt/v4 v4.2.0 // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/google/go-cmp v0.5.7 // indirect
github.com/google/gofuzz v1.2.0 // indirect
@@ -29,7 +28,6 @@ require (
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/op/go-logging v0.0.0-20160315200505-970db520ece7 // indirect
github.com/spf13/pflag v1.0.5 // indirect
github.com/up9inc/mizu/logger v0.0.0 // indirect
golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd // indirect
golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8 // indirect
golang.org/x/sys v0.0.0-20220207234003-57398862261d // indirect

View File

@@ -206,7 +206,6 @@ github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zV
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang-jwt/jwt/v4 v4.0.0/go.mod h1:/xlHOz8bRuivTWchD4jCa+NbatV+wEUSzwAxVc6locg=
github.com/golang-jwt/jwt/v4 v4.2.0 h1:besgBTC8w8HjP6NzQdxwKH9Z5oQMZ24ThTrHp3cZ8eU=
github.com/golang-jwt/jwt/v4 v4.2.0/go.mod h1:/xlHOz8bRuivTWchD4jCa+NbatV+wEUSzwAxVc6locg=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=

View File

@@ -27,7 +27,7 @@ else
fi
echo "Starting minikube..."
minikube start --cpus 2 --memory 6946
minikube start --cpus 2 --memory 6000
echo "Creating mizu tests namespaces"
kubectl create namespace mizu-tests --dry-run=client -o yaml | kubectl apply -f -

View File

@@ -2,10 +2,8 @@ package acceptanceTests
import (
"archive/zip"
"bytes"
"fmt"
"io/ioutil"
"net/http"
"os/exec"
"path"
"strings"
@@ -343,7 +341,7 @@ func TestTapRedact(t *testing.T) {
tapNamespace := GetDefaultTapNamespace()
tapCmdArgs = append(tapCmdArgs, tapNamespace...)
tapCmdArgs = append(tapCmdArgs, "--redact")
tapCmdArgs = append(tapCmdArgs, "--redact", "--set", "tap.redact-patterns.request-headers=User-Header", "--set", "tap.redact-patterns.request-body=User")
tapCmd := exec.Command(cliPath, tapCmdArgs...)
t.Logf("running command: %v", tapCmd.String())
@@ -429,60 +427,6 @@ func TestTapNoRedact(t *testing.T) {
RunCypressTests(t, "npx cypress run --spec \"cypress/e2e/tests/NoRedact.js\"")
}
func TestTapRegexMasking(t *testing.T) {
if testing.Short() {
t.Skip("ignored acceptance test")
}
cliPath, cliPathErr := GetCliPath()
if cliPathErr != nil {
t.Errorf("failed to get cli path, err: %v", cliPathErr)
return
}
tapCmdArgs := GetDefaultTapCommandArgs()
tapNamespace := GetDefaultTapNamespace()
tapCmdArgs = append(tapCmdArgs, tapNamespace...)
tapCmdArgs = append(tapCmdArgs, "--redact")
tapCmdArgs = append(tapCmdArgs, "-r", "Mizu")
tapCmd := exec.Command(cliPath, tapCmdArgs...)
t.Logf("running command: %v", tapCmd.String())
t.Cleanup(func() {
if err := CleanupCommand(tapCmd); err != nil {
t.Logf("failed to cleanup tap command, err: %v", err)
}
})
if err := tapCmd.Start(); err != nil {
t.Errorf("failed to start tap command, err: %v", err)
return
}
apiServerUrl := GetApiServerUrl(DefaultApiServerPort)
if err := WaitTapPodsReady(apiServerUrl); err != nil {
t.Errorf("failed to start tap pods on time, err: %v", err)
return
}
proxyUrl := GetProxyUrl(DefaultNamespaceName, DefaultServiceName)
for i := 0; i < DefaultEntriesCount; i++ {
response, requestErr := http.Post(fmt.Sprintf("%v/post", proxyUrl), "text/plain", bytes.NewBufferString("Mizu"))
if _, requestErr = ExecuteHttpRequest(response, requestErr); requestErr != nil {
t.Errorf("failed to send proxy request, err: %v", requestErr)
return
}
}
RunCypressTests(t, "npx cypress run --spec \"cypress/e2e/tests/RegexMasking.js\"")
}
func TestTapIgnoredUserAgents(t *testing.T) {
if testing.Short() {
t.Skip("ignored acceptance test")

View File

@@ -215,12 +215,11 @@ func DeleteKubeFile(kubeContext string, namespace string, filename string) error
func getDefaultCommandArgs() []string {
agentImageValue := os.Getenv("MIZU_CI_IMAGE")
setFlag := "--set"
telemetry := "telemetry=false"
agentImage := fmt.Sprintf("agent-image=%s", agentImageValue)
imagePullPolicy := "image-pull-policy=IfNotPresent"
headless := "headless=true"
return []string{setFlag, telemetry, setFlag, agentImage, setFlag, imagePullPolicy, setFlag, headless}
return []string{setFlag, agentImage, setFlag, imagePullPolicy, setFlag, headless}
}
func GetDefaultTapCommandArgs() []string {

View File

@@ -30,7 +30,6 @@ require (
github.com/up9inc/mizu/tap/extensions/kafka v0.0.0
github.com/up9inc/mizu/tap/extensions/redis v0.0.0
github.com/wI2L/jsondiff v0.1.1
github.com/yalp/jsonpath v0.0.0-20180802001716-5cc68e5049a0
k8s.io/api v0.23.3
k8s.io/apimachinery v0.23.3
k8s.io/client-go v0.23.3
@@ -49,10 +48,9 @@ require (
github.com/Masterminds/semver v1.5.0 // indirect
github.com/PuerkitoBio/purell v1.1.1 // indirect
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 // indirect
github.com/beevik/etree v1.1.0 // indirect
github.com/chai2010/gettext-go v0.0.0-20160711120539-c6fed771bfd5 // indirect
github.com/chanced/dynamic v0.0.0-20211210164248-f8fadb1d735b // indirect
github.com/cilium/ebpf v0.8.1 // indirect
github.com/cilium/ebpf v0.9.0 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/evanphx/json-patch v5.6.0+incompatible // indirect
github.com/exponent-io/jsonpath v0.0.0-20210407135951-1de76d718b3f // indirect
@@ -90,6 +88,7 @@ require (
github.com/mattn/go-isatty v0.0.14 // indirect
github.com/mertyildiran/gqlparser/v2 v2.4.6 // indirect
github.com/mitchellh/go-wordwrap v1.0.1 // indirect
github.com/moby/moby v20.10.17+incompatible // indirect
github.com/moby/spdystream v0.2.0 // indirect
github.com/moby/term v0.0.0-20210619224110-3f7ff695adc6 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
@@ -105,6 +104,7 @@ require (
github.com/santhosh-tekuri/jsonschema/v5 v5.0.0 // indirect
github.com/segmentio/kafka-go v0.4.27 // indirect
github.com/shirou/gopsutil v3.21.11+incompatible // indirect
github.com/sirupsen/logrus v1.8.1 // indirect
github.com/spf13/cobra v1.3.0 // indirect
github.com/spf13/pflag v1.0.5 // indirect
github.com/struCoder/pidusage v0.2.1 // indirect

View File

@@ -101,7 +101,6 @@ github.com/armon/go-metrics v0.3.10/go.mod h1:4O98XIr/9W0sxpJ8UaYkvjk10Iff7SnFrb
github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
github.com/armon/go-radix v1.0.0/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY=
github.com/beevik/etree v1.1.0 h1:T0xke/WvNtMoCqgzPhkX2r4rjY3GDZFi+FjpRZY2Jbs=
github.com/beevik/etree v1.1.0/go.mod h1:r8Aw8JqVegEf0w2fDnATrX9VpkMcyFeM0FhwO62wh+A=
github.com/benbjohnson/clock v1.0.3/go.mod h1:bGMdMPoPVvcYyt1gHDf4J2KE153Yf9BuiUKYMaxlTDM=
github.com/benbjohnson/clock v1.1.0/go.mod h1:J11/hYXuz8f4ySSvYwY0FKfm+ezbsZBKZxNJlLklBHA=
@@ -128,8 +127,8 @@ github.com/chanced/openapi v0.0.8/go.mod h1:SxE2VMLPw+T7Vq8nwbVVhDF2PigvRF4n5Xyq
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
github.com/cilium/ebpf v0.8.1 h1:bLSSEbBLqGPXxls55pGr5qWZaTqcmfDJHhou7t254ao=
github.com/cilium/ebpf v0.8.1/go.mod h1:f5zLIM0FSNuAkSyLAN7X+Hy6yznlF1mNiWUMfxMtrgk=
github.com/cilium/ebpf v0.9.0 h1:ldiV+FscPCQ/p3mNEV4O02EPbUZJFsoEtHvIr9xLTvk=
github.com/cilium/ebpf v0.9.0/go.mod h1:+OhNOIXx/Fnu1IE8bJz2dzOA+VSfyTfdNUVdlQnxUFY=
github.com/circonus-labs/circonus-gometrics v2.3.1+incompatible/go.mod h1:nmEj6Dob7S7YxXgwXpfOuvO54S+tGdZdw9fuRZt25Ag=
github.com/circonus-labs/circonusllhist v0.1.3/go.mod h1:kMXHVDlOchFAehlya5ePtbp5jckzBHf4XRpQvBOLI+I=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
@@ -517,6 +516,8 @@ github.com/mitchellh/mapstructure v0.0.0-20160808181253-ca63d7c062ee/go.mod h1:F
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/mapstructure v1.4.1/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/mitchellh/mapstructure v1.4.3/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/moby/moby v20.10.17+incompatible h1:TJJfyk2fLEgK+RzqVpFNkDkm0oEi+MLUfwt9lEYnp5g=
github.com/moby/moby v20.10.17+incompatible/go.mod h1:fDXVQ6+S340veQPv35CzDahGBmHsiclFwfEygB/TWMc=
github.com/moby/spdystream v0.2.0 h1:cjW1zVyyoiM0T7b6UoySUFqzXMoqRckQtXwGPiBhOM8=
github.com/moby/spdystream v0.2.0/go.mod h1:f7i0iNDQJ059oMTcWxx8MA/zKFIuD/lY+0GqbN2Wy8c=
github.com/moby/term v0.0.0-20210610120745-9d4ed1856297/go.mod h1:vgPCkQMyxTZ7IDy8SXRufE172gr8+K/JE/7hHFxHW3A=
@@ -629,6 +630,8 @@ github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeV
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88=
github.com/sirupsen/logrus v1.8.1 h1:dJKuHgqk1NNQlqoA6BTlM1Wf9DOH3NBjQyu0h9+AZZE=
github.com/sirupsen/logrus v1.8.1/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
@@ -707,8 +710,6 @@ github.com/xlab/treeprint v0.0.0-20181112141820-a009c3971eca/go.mod h1:ce1O1j6Ut
github.com/xlab/treeprint v1.1.0 h1:G/1DjNkPpfZCFt9CSh6b5/nY4VimlbHF3Rh4obvtzDk=
github.com/xlab/treeprint v1.1.0/go.mod h1:gj5Gd3gPdKtR1ikdDK6fnFLdmIS0X30kTTuNd/WEJu0=
github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=
github.com/yalp/jsonpath v0.0.0-20180802001716-5cc68e5049a0 h1:6fRhSjgLCkTD3JnJxvaJ4Sj+TYblw757bqYgZaOq5ZY=
github.com/yalp/jsonpath v0.0.0-20180802001716-5cc68e5049a0/go.mod h1:/LWChgwKmvncFJFHJ7Gvn9wZArjbV5/FppcK2fKk/tI=
github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
@@ -1251,8 +1252,9 @@ gopkg.in/yaml.v3 v3.0.0-20200615113413-eeeca48fe776/go.mod h1:K4uyk7z7BCEPqu6E+C
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b h1:h8qDotaEPuJATrMmW04NCwg7v22aHH28wwpauUhK9Oo=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gotest.tools/v3 v3.0.2/go.mod h1:3SzNCllyD9/Y+b5r9JIKQ474KzkZyqLqEfYqMsX94Bk=
gotest.tools/v3 v3.0.3 h1:4AuOwCGf4lLR9u3YOe2awrHygurzhO/HeQ6laiA6Sx0=
gotest.tools/v3 v3.0.3/go.mod h1:Z7Lb0S5l+klDB31fvDQX8ss/FlKDxtlFlw3Oa8Ymbl8=
gotest.tools/v3 v3.3.0 h1:MfDY1b1/0xN1CyMlQDac0ziEy9zJQd9CXBRRDHw2jJo=
gotest.tools/v3 v3.3.0/go.mod h1:Mcr9QNxkg0uMvy/YElmo4SpXgJKWgQvYrT7Kw5RzJ1A=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=

View File

@@ -131,6 +131,7 @@ func hostApi(socketHarOutputChannel chan<- *tapApi.OutputChannelItem) *gin.Engin
routes.MetadataRoutes(ginApp)
routes.StatusRoutes(ginApp)
routes.DbRoutes(ginApp)
routes.ReplayRoutes(ginApp)
return ginApp
}
@@ -155,7 +156,7 @@ func runInTapperMode() {
hostMode := os.Getenv(shared.HostModeEnvVar) == "1"
tapOpts := &tap.TapOpts{
HostMode: hostMode,
HostMode: hostMode,
}
filteredOutputItemsChannel := make(chan *tapApi.OutputChannelItem)

View File

@@ -22,7 +22,16 @@ func (e *DefaultEntryStreamerSocketConnector) SendEntry(socketId int, entry *tap
if params.EnableFullEntries {
message, _ = models.CreateFullEntryWebSocketMessage(entry)
} else {
extension := extensionsMap[entry.Protocol.Name]
protocol, ok := protocolsMap[entry.Protocol.ToString()]
if !ok {
return fmt.Errorf("protocol not found, protocol: %v", protocol)
}
extension, ok := extensionsMap[protocol.Name]
if !ok {
return fmt.Errorf("extension not found, extension: %v", protocol.Name)
}
base := extension.Dissector.Summarize(entry)
message, _ = models.CreateBaseEntryWebSocketMessage(base)
}

View File

@@ -12,7 +12,6 @@ import (
"time"
"github.com/up9inc/mizu/agent/pkg/dependency"
"github.com/up9inc/mizu/agent/pkg/models"
"github.com/up9inc/mizu/agent/pkg/oas"
"github.com/up9inc/mizu/agent/pkg/servicemap"
@@ -101,20 +100,13 @@ func startReadingChannel(outputItems <-chan *tapApi.OutputChannelItem, extension
for item := range outputItems {
extension := extensionsMap[item.Protocol.Name]
resolvedSource, resolvedDestionation, namespace := resolveIP(item.ConnectionInfo)
resolvedSource, resolvedDestination, namespace := resolveIP(item.ConnectionInfo)
if namespace == "" && item.Namespace != tapApi.UnknownNamespace {
namespace = item.Namespace
}
mizuEntry := extension.Dissector.Analyze(item, resolvedSource, resolvedDestionation, namespace)
if extension.Protocol.Name == "http" {
harEntry, err := har.NewEntry(mizuEntry.Request, mizuEntry.Response, mizuEntry.StartTime, mizuEntry.ElapsedTime)
if err == nil {
rules, _, _ := models.RunValidationRulesState(*harEntry, mizuEntry.Destination.Name)
mizuEntry.Rules = rules
}
}
mizuEntry := extension.Dissector.Analyze(item, resolvedSource, resolvedDestination, namespace)
data, err := json.Marshal(mizuEntry)
if err != nil {

View File

@@ -14,10 +14,14 @@ import (
tapApi "github.com/up9inc/mizu/tap/api"
)
var extensionsMap map[string]*tapApi.Extension // global
var (
extensionsMap map[string]*tapApi.Extension // global
protocolsMap map[string]*tapApi.Protocol //global
)
func InitExtensionsMap(ref map[string]*tapApi.Extension) {
extensionsMap = ref
func InitMaps(extensions map[string]*tapApi.Extension, protocols map[string]*tapApi.Protocol) {
extensionsMap = extensions
protocolsMap = protocols
}
type EventHandlers interface {
@@ -93,7 +97,9 @@ func websocketHandler(c *gin.Context, eventHandlers EventHandlers, isTapper bool
websocketIdsLock.Unlock()
defer func() {
socketCleanup(socketId, connectedWebsockets[socketId])
if socketConnection := connectedWebsockets[socketId]; socketConnection != nil {
socketCleanup(socketId, socketConnection)
}
}()
eventHandlers.WebSocketConnect(c, socketId, isTapper)

View File

@@ -9,10 +9,11 @@ import (
"github.com/op/go-logging"
basenine "github.com/up9inc/basenine/client/go"
"github.com/up9inc/mizu/agent/pkg/api"
"github.com/up9inc/mizu/agent/pkg/providers"
"github.com/up9inc/mizu/agent/pkg/utils"
"github.com/up9inc/mizu/logger"
"github.com/up9inc/mizu/tap/dbgctl"
tapApi "github.com/up9inc/mizu/tap/api"
"github.com/up9inc/mizu/tap/dbgctl"
amqpExt "github.com/up9inc/mizu/tap/extensions/amqp"
httpExt "github.com/up9inc/mizu/tap/extensions/http"
kafkaExt "github.com/up9inc/mizu/tap/extensions/kafka"
@@ -22,11 +23,13 @@ import (
var (
Extensions []*tapApi.Extension // global
ExtensionsMap map[string]*tapApi.Extension // global
ProtocolsMap map[string]*tapApi.Protocol //global
)
func LoadExtensions() {
Extensions = make([]*tapApi.Extension, 0)
ExtensionsMap = make(map[string]*tapApi.Extension)
ProtocolsMap = make(map[string]*tapApi.Protocol)
extensionHttp := &tapApi.Extension{}
dissectorHttp := httpExt.NewDissector()
@@ -34,6 +37,10 @@ func LoadExtensions() {
extensionHttp.Dissector = dissectorHttp
Extensions = append(Extensions, extensionHttp)
ExtensionsMap[extensionHttp.Protocol.Name] = extensionHttp
protocolsHttp := dissectorHttp.GetProtocols()
for k, v := range protocolsHttp {
ProtocolsMap[k] = v
}
if !dbgctl.MizuTapperDisableNonHttpExtensions {
extensionAmqp := &tapApi.Extension{}
@@ -42,6 +49,10 @@ func LoadExtensions() {
extensionAmqp.Dissector = dissectorAmqp
Extensions = append(Extensions, extensionAmqp)
ExtensionsMap[extensionAmqp.Protocol.Name] = extensionAmqp
protocolsAmqp := dissectorAmqp.GetProtocols()
for k, v := range protocolsAmqp {
ProtocolsMap[k] = v
}
extensionKafka := &tapApi.Extension{}
dissectorKafka := kafkaExt.NewDissector()
@@ -49,6 +60,10 @@ func LoadExtensions() {
extensionKafka.Dissector = dissectorKafka
Extensions = append(Extensions, extensionKafka)
ExtensionsMap[extensionKafka.Protocol.Name] = extensionKafka
protocolsKafka := dissectorKafka.GetProtocols()
for k, v := range protocolsKafka {
ProtocolsMap[k] = v
}
extensionRedis := &tapApi.Extension{}
dissectorRedis := redisExt.NewDissector()
@@ -56,13 +71,18 @@ func LoadExtensions() {
extensionRedis.Dissector = dissectorRedis
Extensions = append(Extensions, extensionRedis)
ExtensionsMap[extensionRedis.Protocol.Name] = extensionRedis
protocolsRedis := dissectorRedis.GetProtocols()
for k, v := range protocolsRedis {
ProtocolsMap[k] = v
}
}
sort.Slice(Extensions, func(i, j int) bool {
return Extensions[i].Protocol.Priority < Extensions[j].Protocol.Priority
})
api.InitExtensionsMap(ExtensionsMap)
api.InitMaps(ExtensionsMap, ProtocolsMap)
providers.InitProtocolToColor(ProtocolsMap)
}
func ConfigureBasenineServer(host string, port string, dbSize int64, logLevel logging.Level, insertionFilter string) {

View File

@@ -0,0 +1,34 @@
package controllers
import (
"net/http"
"time"
"github.com/gin-gonic/gin"
"github.com/up9inc/mizu/agent/pkg/replay"
"github.com/up9inc/mizu/agent/pkg/validation"
"github.com/up9inc/mizu/logger"
)
const (
replayTimeout = 10 * time.Second
)
func ReplayRequest(c *gin.Context) {
logger.Log.Debug("Starting replay")
replayDetails := &replay.Details{}
if err := c.Bind(replayDetails); err != nil {
c.JSON(http.StatusBadRequest, err)
return
}
logger.Log.Debugf("Validating replay, %v", replayDetails)
if err := validation.Validate(replayDetails); err != nil {
c.JSON(http.StatusBadRequest, err)
return
}
logger.Log.Debug("Executing replay, %v", replayDetails)
result := replay.ExecuteRequest(replayDetails, replayTimeout)
c.JSON(http.StatusOK, result)
}

View File

@@ -36,11 +36,13 @@ var (
)
var ProtocolHttp = &tapApi.Protocol{
Name: "http",
ProtocolSummary: tapApi.ProtocolSummary{
Name: "http",
Version: "1.1",
Abbreviation: "HTTP",
},
LongName: "Hypertext Transfer Protocol -- HTTP/1.1",
Abbreviation: "HTTP",
Macro: "http",
Version: "1.1",
BackgroundColor: "#205cf5",
ForegroundColor: "#ffffff",
FontSize: 12,

View File

@@ -1,7 +1,10 @@
package controllers
import (
"fmt"
"net/http"
"strconv"
"time"
core "k8s.io/api/core/v1"
@@ -79,13 +82,25 @@ func GetGeneralStats(c *gin.Context) {
c.JSON(http.StatusOK, providers.GetGeneralStats())
}
func GetAccumulativeStats(c *gin.Context) {
c.JSON(http.StatusOK, providers.GetAccumulativeStats())
func GetTrafficStats(c *gin.Context) {
startTime, endTime, err := getStartEndTime(c)
if err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, providers.GetTrafficStats(startTime, endTime))
}
func GetAccumulativeStatsTiming(c *gin.Context) {
// for now hardcoded 10 bars of 5 minutes interval
c.JSON(http.StatusOK, providers.GetAccumulativeStatsTiming(300, 10))
func getStartEndTime(c *gin.Context) (time.Time, time.Time, error) {
startTimeValue, err := strconv.Atoi(c.Query("startTimeMs"))
if err != nil {
return time.UnixMilli(0), time.UnixMilli(0), fmt.Errorf("invalid start time: %v", err)
}
endTimeValue, err := strconv.Atoi(c.Query("endTimeMs"))
if err != nil {
return time.UnixMilli(0), time.UnixMilli(0), fmt.Errorf("invalid end time: %v", err)
}
return time.UnixMilli(int64(startTimeValue)), time.UnixMilli(int64(endTimeValue)), nil
}
func GetCurrentResolvingInformation(c *gin.Context) {

View File

@@ -3,11 +3,11 @@ package entries
import (
"encoding/json"
"errors"
"fmt"
"time"
basenine "github.com/up9inc/basenine/client/go"
"github.com/up9inc/mizu/agent/pkg/app"
"github.com/up9inc/mizu/agent/pkg/har"
"github.com/up9inc/mizu/agent/pkg/models"
"github.com/up9inc/mizu/logger"
"github.com/up9inc/mizu/shared"
@@ -38,11 +38,20 @@ func (e *BasenineEntriesProvider) GetEntries(entriesRequest *models.EntriesReque
return nil, nil, err
}
extension := app.ExtensionsMap[entry.Protocol.Name]
protocol, ok := app.ProtocolsMap[entry.Protocol.ToString()]
if !ok {
return nil, nil, fmt.Errorf("protocol not found, protocol: %v", protocol)
}
extension, ok := app.ExtensionsMap[protocol.Name]
if !ok {
return nil, nil, fmt.Errorf("extension not found, extension: %v", protocol.Name)
}
base := extension.Dissector.Summarize(entry)
dataSlice = append(dataSlice, &tapApi.EntryWrapper{
Protocol: entry.Protocol,
Protocol: *protocol,
Data: entry,
Base: base,
})
@@ -68,7 +77,16 @@ func (e *BasenineEntriesProvider) GetEntry(singleEntryRequest *models.SingleEntr
return nil, errors.New(string(bytes))
}
extension := app.ExtensionsMap[entry.Protocol.Name]
protocol, ok := app.ProtocolsMap[entry.Protocol.ToString()]
if !ok {
return nil, fmt.Errorf("protocol not found, protocol: %v", protocol)
}
extension, ok := app.ExtensionsMap[protocol.Name]
if !ok {
return nil, fmt.Errorf("extension not found, extension: %v", protocol.Name)
}
base := extension.Dissector.Summarize(entry)
var representation []byte
representation, err = extension.Dissector.Represent(entry.Request, entry.Response)
@@ -76,24 +94,10 @@ func (e *BasenineEntriesProvider) GetEntry(singleEntryRequest *models.SingleEntr
return nil, err
}
var rules []map[string]interface{}
var isRulesEnabled bool
if entry.Protocol.Name == "http" {
harEntry, _ := har.NewEntry(entry.Request, entry.Response, entry.StartTime, entry.ElapsedTime)
_, rulesMatched, _isRulesEnabled := models.RunValidationRulesState(*harEntry, entry.Destination.Name)
isRulesEnabled = _isRulesEnabled
inrec, _ := json.Marshal(rulesMatched)
if err := json.Unmarshal(inrec, &rules); err != nil {
logger.Log.Error(err)
}
}
return &tapApi.EntryWrapper{
Protocol: entry.Protocol,
Protocol: *protocol,
Representation: string(representation),
Data: entry,
Base: base,
Rules: rules,
IsRulesEnabled: isRulesEnabled,
}, nil
}

View File

@@ -11,75 +11,30 @@ import (
"github.com/up9inc/mizu/logger"
)
// Keep it because we might want cookies in the future
//func BuildCookies(rawCookies []interface{}) []har.Cookie {
// cookies := make([]har.Cookie, 0, len(rawCookies))
//
// for _, cookie := range rawCookies {
// c := cookie.(map[string]interface{})
// expiresStr := ""
// if c["expires"] != nil {
// expiresStr = c["expires"].(string)
// }
// expires, _ := time.Parse(time.RFC3339, expiresStr)
// httpOnly := false
// if c["httponly"] != nil {
// httpOnly, _ = strconv.ParseBool(c["httponly"].(string))
// }
// secure := false
// if c["secure"] != nil {
// secure, _ = strconv.ParseBool(c["secure"].(string))
// }
// path := ""
// if c["path"] != nil {
// path = c["path"].(string)
// }
// domain := ""
// if c["domain"] != nil {
// domain = c["domain"].(string)
// }
//
// cookies = append(cookies, har.Cookie{
// Name: c["name"].(string),
// Value: c["value"].(string),
// Path: path,
// Domain: domain,
// HTTPOnly: httpOnly,
// Secure: secure,
// Expires: expires,
// Expires8601: expiresStr,
// })
// }
//
// return cookies
//}
func BuildHeaders(rawHeaders []interface{}) ([]Header, string, string, string, string, string) {
func BuildHeaders(rawHeaders map[string]interface{}) ([]Header, string, string, string, string, string) {
var host, scheme, authority, path, status string
headers := make([]Header, 0, len(rawHeaders))
for _, header := range rawHeaders {
h := header.(map[string]interface{})
for key, value := range rawHeaders {
headers = append(headers, Header{
Name: h["name"].(string),
Value: h["value"].(string),
Name: key,
Value: value.(string),
})
if h["name"] == "Host" {
host = h["value"].(string)
if key == "Host" {
host = value.(string)
}
if h["name"] == ":authority" {
authority = h["value"].(string)
if key == ":authority" {
authority = value.(string)
}
if h["name"] == ":scheme" {
scheme = h["value"].(string)
if key == ":scheme" {
scheme = value.(string)
}
if h["name"] == ":path" {
path = h["value"].(string)
if key == ":path" {
path = value.(string)
}
if h["name"] == ":status" {
status = h["value"].(string)
if key == ":status" {
status = value.(string)
}
}
@@ -119,8 +74,8 @@ func BuildPostParams(rawParams []interface{}) []Param {
}
func NewRequest(request map[string]interface{}) (harRequest *Request, err error) {
headers, host, scheme, authority, path, _ := BuildHeaders(request["_headers"].([]interface{}))
cookies := make([]Cookie, 0) // BuildCookies(request["_cookies"].([]interface{}))
headers, host, scheme, authority, path, _ := BuildHeaders(request["headers"].(map[string]interface{}))
cookies := make([]Cookie, 0)
postData, _ := request["postData"].(map[string]interface{})
mimeType := postData["mimeType"]
@@ -134,12 +89,20 @@ func NewRequest(request map[string]interface{}) (harRequest *Request, err error)
}
queryString := make([]QueryString, 0)
for _, _qs := range request["_queryString"].([]interface{}) {
qs := _qs.(map[string]interface{})
queryString = append(queryString, QueryString{
Name: qs["name"].(string),
Value: qs["value"].(string),
})
for key, value := range request["queryString"].(map[string]interface{}) {
if valuesInterface, ok := value.([]interface{}); ok {
for _, valueInterface := range valuesInterface {
queryString = append(queryString, QueryString{
Name: key,
Value: valueInterface.(string),
})
}
} else {
queryString = append(queryString, QueryString{
Name: key,
Value: value.(string),
})
}
}
url := fmt.Sprintf("http://%s%s", host, request["url"].(string))
@@ -172,8 +135,8 @@ func NewRequest(request map[string]interface{}) (harRequest *Request, err error)
}
func NewResponse(response map[string]interface{}) (harResponse *Response, err error) {
headers, _, _, _, _, _status := BuildHeaders(response["_headers"].([]interface{}))
cookies := make([]Cookie, 0) // BuildCookies(response["_cookies"].([]interface{}))
headers, _, _, _, _, _status := BuildHeaders(response["headers"].(map[string]interface{}))
cookies := make([]Cookie, 0)
content, _ := response["content"].(map[string]interface{})
mimeType := content["mimeType"]

View File

@@ -4,7 +4,6 @@ import (
"encoding/json"
"github.com/up9inc/mizu/agent/pkg/har"
"github.com/up9inc/mizu/agent/pkg/rules"
tapApi "github.com/up9inc/mizu/tap/api"
basenine "github.com/up9inc/basenine/client/go"
@@ -143,9 +142,3 @@ type ExtendedCreator struct {
*har.Creator
Source *string `json:"_source"`
}
func RunValidationRulesState(harEntry har.Entry, service string) (tapApi.ApplicableRules, []rules.RulesMatched, bool) {
resultPolicyToSend, isEnabled := rules.MatchRequestPolicy(harEntry, service)
statusPolicyToSend, latency, numberOfRules := rules.PassedValidationRules(resultPolicyToSend)
return tapApi.ApplicableRules{Status: statusPolicyToSend, Latency: latency, NumberOfRules: numberOfRules}, resultPolicyToSend, isEnabled
}

View File

@@ -6,9 +6,8 @@ import (
"sync"
"github.com/up9inc/mizu/agent/pkg/har"
"github.com/up9inc/mizu/tap/api"
"github.com/up9inc/mizu/logger"
"github.com/up9inc/mizu/tap/api"
)
var (

View File

@@ -1,6 +1,9 @@
package providers
import (
"crypto/md5"
"encoding/hex"
"fmt"
"reflect"
"sync"
"time"
@@ -26,7 +29,6 @@ type TimeFrameStatsValue struct {
type ProtocolStats struct {
MethodsStats map[string]*SizeAndEntriesCount `json:"methods"`
Color string `json:"color"`
}
type SizeAndEntriesCount struct {
@@ -36,13 +38,13 @@ type SizeAndEntriesCount struct {
type AccumulativeStatsCounter struct {
Name string `json:"name"`
Color string `json:"color"`
EntriesCount int `json:"entriesCount"`
VolumeSizeBytes int `json:"volumeSizeBytes"`
}
type AccumulativeStatsProtocol struct {
AccumulativeStatsCounter
Color string `json:"color"`
Methods []*AccumulativeStatsCounter `json:"methods"`
}
@@ -51,46 +53,46 @@ type AccumulativeStatsProtocolTime struct {
Time int64 `json:"timestamp"`
}
type TrafficStatsResponse struct {
Protocols []string `json:"protocols"`
PieStats []*AccumulativeStatsProtocol `json:"pie"`
TimelineStats []*AccumulativeStatsProtocolTime `json:"timeline"`
}
var (
generalStats = GeneralStats{}
bucketsStats = BucketStats{}
bucketStatsLocker = sync.Mutex{}
protocolToColor = map[string]string{}
)
const (
InternalBucketThreshold = time.Minute * 1
MaxNumberOfBars = 30
)
func ResetGeneralStats() {
generalStats = GeneralStats{}
}
func GetGeneralStats() GeneralStats {
return generalStats
func GetGeneralStats() *GeneralStats {
return &generalStats
}
func GetAccumulativeStats() []*AccumulativeStatsProtocol {
bucketStatsCopy := getBucketStatsCopy()
if len(bucketStatsCopy) == 0 {
return make([]*AccumulativeStatsProtocol, 0)
func InitProtocolToColor(protocolMap map[string]*api.Protocol) {
for item, value := range protocolMap {
protocolToColor[api.GetProtocolSummary(item).Abbreviation] = value.BackgroundColor
}
methodsPerProtocolAggregated, protocolToColor := getAggregatedStatsAllTime(bucketStatsCopy)
return convertAccumulativeStatsDictToArray(methodsPerProtocolAggregated, protocolToColor)
}
func GetAccumulativeStatsTiming(intervalSeconds int, numberOfBars int) []*AccumulativeStatsProtocolTime {
bucketStatsCopy := getBucketStatsCopy()
if len(bucketStatsCopy) == 0 {
return make([]*AccumulativeStatsProtocolTime, 0)
func GetTrafficStats(startTime time.Time, endTime time.Time) *TrafficStatsResponse {
bucketsStatsCopy := getFilteredBucketStatsCopy(startTime, endTime)
return &TrafficStatsResponse{
Protocols: getAvailableProtocols(bucketsStatsCopy),
PieStats: getAccumulativeStats(bucketsStatsCopy),
TimelineStats: getAccumulativeStatsTiming(bucketsStatsCopy),
}
firstBucketTime := getFirstBucketTime(time.Now().UTC(), intervalSeconds, numberOfBars)
methodsPerProtocolPerTimeAggregated, protocolToColor := getAggregatedResultTimingFromSpecificTime(intervalSeconds, bucketStatsCopy, firstBucketTime)
return convertAccumulativeStatsTimelineDictToArray(methodsPerProtocolPerTimeAggregated, protocolToColor)
}
func EntryAdded(size int, summery *api.BaseEntry) {
@@ -108,6 +110,65 @@ func EntryAdded(size int, summery *api.BaseEntry) {
generalStats.LastEntryTimestamp = currentTimestamp
}
func calculateInterval(firstTimestamp int64, lastTimestamp int64) time.Duration {
validDurations := []time.Duration{
time.Minute,
time.Minute * 2,
time.Minute * 3,
time.Minute * 5,
time.Minute * 10,
time.Minute * 15,
time.Minute * 20,
time.Minute * 30,
time.Minute * 45,
time.Minute * 60,
time.Minute * 75,
time.Minute * 90, // 1.5 hours
time.Minute * 120, // 2 hours
time.Minute * 150, // 2.5 hours
time.Minute * 180, // 3 hours
time.Minute * 240, // 4 hours
time.Minute * 300, // 5 hours
time.Minute * 360, // 6 hours
time.Minute * 420, // 7 hours
time.Minute * 480, // 8 hours
time.Minute * 540, // 9 hours
time.Minute * 600, // 10 hours
time.Minute * 660, // 11 hours
time.Minute * 720, // 12 hours
time.Minute * 1440, // 24 hours
}
duration := time.Duration(lastTimestamp-firstTimestamp) * time.Second / time.Duration(MaxNumberOfBars)
for _, validDuration := range validDurations {
if validDuration-duration >= 0 {
return validDuration
}
}
return duration.Round(validDurations[len(validDurations)-1])
}
func getAccumulativeStats(stats BucketStats) []*AccumulativeStatsProtocol {
if len(stats) == 0 {
return make([]*AccumulativeStatsProtocol, 0)
}
methodsPerProtocolAggregated := getAggregatedStats(stats)
return convertAccumulativeStatsDictToArray(methodsPerProtocolAggregated)
}
func getAccumulativeStatsTiming(stats BucketStats) []*AccumulativeStatsProtocolTime {
if len(stats) == 0 {
return make([]*AccumulativeStatsProtocolTime, 0)
}
interval := calculateInterval(stats[0].BucketTime.Unix(), stats[len(stats)-1].BucketTime.Unix()) // in seconds
methodsPerProtocolPerTimeAggregated := getAggregatedResultTiming(stats, interval)
return convertAccumulativeStatsTimelineDictToArray(methodsPerProtocolPerTimeAggregated)
}
func addToBucketStats(size int, summery *api.BaseEntry) {
entryTimeBucketRounded := getBucketFromTimeStamp(summery.Timestamp)
@@ -128,7 +189,6 @@ func addToBucketStats(size int, summery *api.BaseEntry) {
if _, found := bucketOfEntry.ProtocolStats[summery.Protocol.Abbreviation]; !found {
bucketOfEntry.ProtocolStats[summery.Protocol.Abbreviation] = ProtocolStats{
MethodsStats: map[string]*SizeAndEntriesCount{},
Color: summery.Protocol.BackgroundColor,
}
}
if _, found := bucketOfEntry.ProtocolStats[summery.Protocol.Abbreviation].MethodsStats[summery.Method]; !found {
@@ -147,21 +207,15 @@ func getBucketFromTimeStamp(timestamp int64) time.Time {
return entryTimeStampAsTime.Add(-1 * InternalBucketThreshold / 2).Round(InternalBucketThreshold)
}
func getFirstBucketTime(endTime time.Time, intervalSeconds int, numberOfBars int) time.Time {
lastBucketTime := endTime.Add(-1 * time.Second * time.Duration(intervalSeconds) / 2).Round(time.Second * time.Duration(intervalSeconds))
firstBucketTime := lastBucketTime.Add(-1 * time.Second * time.Duration(intervalSeconds*(numberOfBars-1)))
return firstBucketTime
}
func convertAccumulativeStatsTimelineDictToArray(methodsPerProtocolPerTimeAggregated map[time.Time]map[string]map[string]*AccumulativeStatsCounter, protocolToColor map[string]string) []*AccumulativeStatsProtocolTime {
func convertAccumulativeStatsTimelineDictToArray(methodsPerProtocolPerTimeAggregated map[time.Time]map[string]map[string]*AccumulativeStatsCounter) []*AccumulativeStatsProtocolTime {
finalResult := make([]*AccumulativeStatsProtocolTime, 0)
for timeKey, item := range methodsPerProtocolPerTimeAggregated {
protocolsData := make([]*AccumulativeStatsProtocol, 0)
for protocolName := range item {
for protocolName, value := range item {
entriesCount := 0
volumeSizeBytes := 0
methods := make([]*AccumulativeStatsCounter, 0)
for _, methodAccData := range methodsPerProtocolPerTimeAggregated[timeKey][protocolName] {
for _, methodAccData := range value {
entriesCount += methodAccData.EntriesCount
volumeSizeBytes += methodAccData.VolumeSizeBytes
methods = append(methods, methodAccData)
@@ -169,10 +223,10 @@ func convertAccumulativeStatsTimelineDictToArray(methodsPerProtocolPerTimeAggreg
protocolsData = append(protocolsData, &AccumulativeStatsProtocol{
AccumulativeStatsCounter: AccumulativeStatsCounter{
Name: protocolName,
Color: protocolToColor[protocolName],
EntriesCount: entriesCount,
VolumeSizeBytes: volumeSizeBytes,
},
Color: protocolToColor[protocolName],
Methods: methods,
})
}
@@ -184,7 +238,7 @@ func convertAccumulativeStatsTimelineDictToArray(methodsPerProtocolPerTimeAggreg
return finalResult
}
func convertAccumulativeStatsDictToArray(methodsPerProtocolAggregated map[string]map[string]*AccumulativeStatsCounter, protocolToColor map[string]string) []*AccumulativeStatsProtocol {
func convertAccumulativeStatsDictToArray(methodsPerProtocolAggregated map[string]map[string]*AccumulativeStatsCounter) []*AccumulativeStatsProtocol {
protocolsData := make([]*AccumulativeStatsProtocol, 0)
for protocolName, value := range methodsPerProtocolAggregated {
entriesCount := 0
@@ -198,17 +252,17 @@ func convertAccumulativeStatsDictToArray(methodsPerProtocolAggregated map[string
protocolsData = append(protocolsData, &AccumulativeStatsProtocol{
AccumulativeStatsCounter: AccumulativeStatsCounter{
Name: protocolName,
Color: protocolToColor[protocolName],
EntriesCount: entriesCount,
VolumeSizeBytes: volumeSizeBytes,
},
Color: protocolToColor[protocolName],
Methods: methods,
})
}
return protocolsData
}
func getBucketStatsCopy() BucketStats {
func getFilteredBucketStatsCopy(startTime time.Time, endTime time.Time) BucketStats {
bucketStatsCopy := BucketStats{}
bucketStatsLocker.Lock()
if err := copier.Copy(&bucketStatsCopy, bucketsStats); err != nil {
@@ -216,58 +270,59 @@ func getBucketStatsCopy() BucketStats {
return nil
}
bucketStatsLocker.Unlock()
return bucketStatsCopy
filteredBucketStatsCopy := BucketStats{}
interval := InternalBucketThreshold
for _, bucket := range bucketStatsCopy {
if (bucket.BucketTime.After(startTime.Add(-1*interval/2).Round(interval)) && bucket.BucketTime.Before(endTime.Add(-1*interval/2).Round(interval))) ||
bucket.BucketTime.Equal(startTime.Add(-1*interval/2).Round(interval)) ||
bucket.BucketTime.Equal(endTime.Add(-1*interval/2).Round(interval)) {
filteredBucketStatsCopy = append(filteredBucketStatsCopy, bucket)
}
}
return filteredBucketStatsCopy
}
func getAggregatedResultTimingFromSpecificTime(intervalSeconds int, bucketStats BucketStats, firstBucketTime time.Time) (map[time.Time]map[string]map[string]*AccumulativeStatsCounter, map[string]string) {
protocolToColor := map[string]string{}
func getAggregatedResultTiming(stats BucketStats, interval time.Duration) map[time.Time]map[string]map[string]*AccumulativeStatsCounter {
methodsPerProtocolPerTimeAggregated := map[time.Time]map[string]map[string]*AccumulativeStatsCounter{}
bucketStatsIndex := len(bucketStats) - 1
bucketStatsIndex := len(stats) - 1
for bucketStatsIndex >= 0 {
currentBucketTime := bucketStats[bucketStatsIndex].BucketTime
if currentBucketTime.After(firstBucketTime) || currentBucketTime.Equal(firstBucketTime) {
resultBucketRoundedKey := currentBucketTime.Add(-1 * time.Second * time.Duration(intervalSeconds) / 2).Round(time.Second * time.Duration(intervalSeconds))
currentBucketTime := stats[bucketStatsIndex].BucketTime
resultBucketRoundedKey := currentBucketTime.Add(-1 * interval / 2).Round(interval)
for protocolName, data := range bucketStats[bucketStatsIndex].ProtocolStats {
if _, ok := protocolToColor[protocolName]; !ok {
protocolToColor[protocolName] = data.Color
for protocolName, data := range stats[bucketStatsIndex].ProtocolStats {
for methodName, dataOfMethod := range data.MethodsStats {
if _, ok := methodsPerProtocolPerTimeAggregated[resultBucketRoundedKey]; !ok {
methodsPerProtocolPerTimeAggregated[resultBucketRoundedKey] = map[string]map[string]*AccumulativeStatsCounter{}
}
for methodName, dataOfMethod := range data.MethodsStats {
if _, ok := methodsPerProtocolPerTimeAggregated[resultBucketRoundedKey]; !ok {
methodsPerProtocolPerTimeAggregated[resultBucketRoundedKey] = map[string]map[string]*AccumulativeStatsCounter{}
}
if _, ok := methodsPerProtocolPerTimeAggregated[resultBucketRoundedKey][protocolName]; !ok {
methodsPerProtocolPerTimeAggregated[resultBucketRoundedKey][protocolName] = map[string]*AccumulativeStatsCounter{}
}
if _, ok := methodsPerProtocolPerTimeAggregated[resultBucketRoundedKey][protocolName][methodName]; !ok {
methodsPerProtocolPerTimeAggregated[resultBucketRoundedKey][protocolName][methodName] = &AccumulativeStatsCounter{
Name: methodName,
EntriesCount: 0,
VolumeSizeBytes: 0,
}
}
methodsPerProtocolPerTimeAggregated[resultBucketRoundedKey][protocolName][methodName].EntriesCount += dataOfMethod.EntriesCount
methodsPerProtocolPerTimeAggregated[resultBucketRoundedKey][protocolName][methodName].VolumeSizeBytes += dataOfMethod.VolumeInBytes
if _, ok := methodsPerProtocolPerTimeAggregated[resultBucketRoundedKey][protocolName]; !ok {
methodsPerProtocolPerTimeAggregated[resultBucketRoundedKey][protocolName] = map[string]*AccumulativeStatsCounter{}
}
if _, ok := methodsPerProtocolPerTimeAggregated[resultBucketRoundedKey][protocolName][methodName]; !ok {
methodsPerProtocolPerTimeAggregated[resultBucketRoundedKey][protocolName][methodName] = &AccumulativeStatsCounter{
Name: methodName,
Color: getColorForMethod(protocolName, methodName),
EntriesCount: 0,
VolumeSizeBytes: 0,
}
}
methodsPerProtocolPerTimeAggregated[resultBucketRoundedKey][protocolName][methodName].EntriesCount += dataOfMethod.EntriesCount
methodsPerProtocolPerTimeAggregated[resultBucketRoundedKey][protocolName][methodName].VolumeSizeBytes += dataOfMethod.VolumeInBytes
}
}
bucketStatsIndex--
}
return methodsPerProtocolPerTimeAggregated, protocolToColor
return methodsPerProtocolPerTimeAggregated
}
func getAggregatedStatsAllTime(bucketStatsCopy BucketStats) (map[string]map[string]*AccumulativeStatsCounter, map[string]string) {
protocolToColor := make(map[string]string, 0)
func getAggregatedStats(stats BucketStats) map[string]map[string]*AccumulativeStatsCounter {
methodsPerProtocolAggregated := make(map[string]map[string]*AccumulativeStatsCounter, 0)
for _, countersOfTimeFrame := range bucketStatsCopy {
for _, countersOfTimeFrame := range stats {
for protocolName, value := range countersOfTimeFrame.ProtocolStats {
if _, ok := protocolToColor[protocolName]; !ok {
protocolToColor[protocolName] = value.Color
}
for method, countersValue := range value.MethodsStats {
if _, found := methodsPerProtocolAggregated[protocolName]; !found {
methodsPerProtocolAggregated[protocolName] = map[string]*AccumulativeStatsCounter{}
@@ -275,6 +330,7 @@ func getAggregatedStatsAllTime(bucketStatsCopy BucketStats) (map[string]map[stri
if _, found := methodsPerProtocolAggregated[protocolName][method]; !found {
methodsPerProtocolAggregated[protocolName][method] = &AccumulativeStatsCounter{
Name: method,
Color: getColorForMethod(protocolName, method),
EntriesCount: 0,
VolumeSizeBytes: 0,
}
@@ -284,5 +340,27 @@ func getAggregatedStatsAllTime(bucketStatsCopy BucketStats) (map[string]map[stri
}
}
}
return methodsPerProtocolAggregated, protocolToColor
return methodsPerProtocolAggregated
}
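A short usage sketch of the aggregated shape returned above (protocol, then method, then counters). This is an illustrative fragment, assuming it lives in the same package and that some stats BucketStats value is at hand:

aggregated := getAggregatedStats(stats)
for protocolName, methods := range aggregated {
	for methodName, counter := range methods {
		// Color is filled in deterministically by getColorForMethod below.
		fmt.Printf("%s %s: entries=%v volume=%v color=%s\n",
			protocolName, methodName, counter.EntriesCount, counter.VolumeSizeBytes, counter.Color)
	}
}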
func getColorForMethod(protocolName string, methodName string) string {
hash := md5.Sum([]byte(fmt.Sprintf("%v_%v", protocolName, methodName)))
input := hex.EncodeToString(hash[:])
return fmt.Sprintf("#%v", input[:6])
}
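The color helper is deterministic: a protocol/method pair always hashes to the same six hex digits, so charts keep stable colors across restarts without storing a palette. A standalone sketch of the same derivation (not part of the diff):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
)

func main() {
	protocolName, methodName := "http", "GET"
	hash := md5.Sum([]byte(fmt.Sprintf("%v_%v", protocolName, methodName)))
	// The first six hex characters of the digest become a CSS-style color.
	fmt.Printf("#%v\n", hex.EncodeToString(hash[:])[:6]) // same output on every run
}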
func getAvailableProtocols(stats BucketStats) []string {
protocols := map[string]bool{}
for _, countersOfTimeFrame := range stats {
for protocolName := range countersOfTimeFrame.ProtocolStats {
protocols[protocolName] = true
}
}
result := make([]string, 0)
for protocol := range protocols {
result = append(result, protocol)
}
result = append(result, "ALL")
return result
}
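One usage note: the protocol names are collected through a map, so the slice order is not deterministic. A caller that needs a stable order can sort it (illustrative fragment, assuming a stats value and "sort" in the imports):

protocols := getAvailableProtocols(stats)
sort.Strings(protocols) // e.g. [ALL http redis] instead of a random order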


@@ -2,7 +2,6 @@ package providers
import (
"fmt"
"reflect"
"testing"
"time"
)
@@ -26,38 +25,6 @@ func TestGetBucketOfTimeStamp(t *testing.T) {
}
}
type DataForBucketBorderFunction struct {
EndTime time.Time
IntervalInSeconds int
NumberOfBars int
}
func TestGetBucketBorders(t *testing.T) {
tests := map[DataForBucketBorderFunction]time.Time{
DataForBucketBorderFunction{
time.Date(2022, time.Month(1), 1, 10, 34, 45, 0, time.UTC),
300,
10,
}: time.Date(2022, time.Month(1), 1, 9, 45, 0, 0, time.UTC),
DataForBucketBorderFunction{
time.Date(2022, time.Month(1), 1, 10, 35, 45, 0, time.UTC),
60,
5,
}: time.Date(2022, time.Month(1), 1, 10, 31, 00, 0, time.UTC),
}
for key, value := range tests {
t.Run(fmt.Sprintf("%v", key), func(t *testing.T) {
actual := getFirstBucketTime(key.EndTime, key.IntervalInSeconds, key.NumberOfBars)
if actual != value {
t.Errorf("unexpected result - expected: %v, actual: %v", value, actual)
}
})
}
}
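The expected values in the table above are consistent with flooring the end time to the interval and stepping back numberOfBars-1 intervals. A standalone sketch that reproduces the arithmetic (an illustrative reimplementation, not the production getFirstBucketTime):

package main

import (
	"fmt"
	"time"
)

// firstBucket floors the end time to the interval, then steps back
// (numberOfBars - 1) intervals, matching the expectations in the test above.
func firstBucket(endTime time.Time, intervalSeconds int, numberOfBars int) time.Time {
	interval := time.Duration(intervalSeconds) * time.Second
	floored := endTime.Add(-1 * interval / 2).Round(interval)
	return floored.Add(-time.Duration(numberOfBars-1) * interval)
}

func main() {
	fmt.Println(firstBucket(time.Date(2022, time.January, 1, 10, 34, 45, 0, time.UTC), 300, 10))
	// 2022-01-01 09:45:00 +0000 UTC
	fmt.Println(firstBucket(time.Date(2022, time.January, 1, 10, 35, 45, 0, time.UTC), 60, 5))
	// 2022-01-01 10:31:00 +0000 UTC
}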
func TestGetAggregatedStatsAllTime(t *testing.T) {
bucketStatsForTest := BucketStats{
&TimeFrameStatsValue{
@@ -140,10 +107,10 @@ func TestGetAggregatedStatsAllTime(t *testing.T) {
},
},
}
actual, _ := getAggregatedStatsAllTime(bucketStatsForTest)
actual := getAggregatedStats(bucketStatsForTest)
if !reflect.DeepEqual(actual, expected) {
t.Errorf("unexpected result - expected: %v, actual: %v", 3, len(actual))
if len(actual) != len(expected) {
t.Errorf("unexpected result - expected: %v, actual: %v", len(expected), len(actual))
}
}
@@ -227,10 +194,10 @@ func TestGetAggregatedStatsFromSpecificTime(t *testing.T) {
},
},
}
actual, _ := getAggregatedResultTimingFromSpecificTime(300, bucketStatsForTest, time.Date(2022, time.Month(1), 1, 10, 00, 00, 0, time.UTC))
actual := getAggregatedResultTiming(bucketStatsForTest, time.Minute*5)
if !reflect.DeepEqual(actual, expected) {
t.Errorf("unexpected result - expected: %v, actual: %v", 3, len(actual))
if len(actual) != len(expected) {
t.Errorf("unexpected result - expected: %v, actual: %v", len(expected), len(actual))
}
}
@@ -323,9 +290,9 @@ func TestGetAggregatedStatsFromSpecificTimeMultipleBuckets(t *testing.T) {
},
},
}
actual, _ := getAggregatedResultTimingFromSpecificTime(60, bucketStatsForTest, time.Date(2022, time.Month(1), 1, 10, 00, 00, 0, time.UTC))
actual := getAggregatedResultTiming(bucketStatsForTest, time.Minute)
if !reflect.DeepEqual(actual, expected) {
t.Errorf("unexpected result - expected: %v, actual: %v", 3, len(actual))
if len(actual) != len(expected) {
t.Errorf("unexpected result - expected: %v, actual: %v", len(expected), len(actual))
}
}


@@ -26,7 +26,7 @@ func TestEntryAddedCount(t *testing.T) {
entryBucketKey := time.Date(2021, 1, 1, 10, 0, 0, 0, time.UTC)
valueLessThanBucketThreshold := time.Second * 130
mockSummery := &api.BaseEntry{Protocol: api.Protocol{Name: "mock"}, Method: "mock-method", Timestamp: entryBucketKey.Add(valueLessThanBucketThreshold).UnixNano()}
mockSummery := &api.BaseEntry{Protocol: api.Protocol{ProtocolSummary: api.ProtocolSummary{Name: "mock"}}, Method: "mock-method", Timestamp: entryBucketKey.Add(valueLessThanBucketThreshold).UnixNano()}
for _, entriesCount := range tests {
t.Run(fmt.Sprintf("%d", entriesCount), func(t *testing.T) {
for i := 0; i < entriesCount; i++ {
@@ -61,7 +61,7 @@ func TestEntryAddedVolume(t *testing.T) {
var expectedEntriesCount int
var expectedVolumeInGB float64
mockSummery := &api.BaseEntry{Protocol: api.Protocol{Name: "mock"}, Method: "mock-method", Timestamp: time.Date(2021, 1, 1, 10, 0, 0, 0, time.UTC).UnixNano()}
mockSummery := &api.BaseEntry{Protocol: api.Protocol{ProtocolSummary: api.ProtocolSummary{Name: "mock"}}, Method: "mock-method", Timestamp: time.Date(2021, 1, 1, 10, 0, 0, 0, time.UTC).UnixNano()}
for _, data := range tests {
t.Run(fmt.Sprintf("%d", len(data)), func(t *testing.T) {


@@ -60,7 +60,7 @@ func GetTappedPodsStatus() []shared.TappedPodStatus {
func SetNodeToTappedPodMap(nodeToTappedPodsMap shared.NodeToPodsMap) {
summary := nodeToTappedPodsMap.Summary()
logger.Log.Infof("Setting node to tapped pods map to %v", summary)
logger.Log.Debugf("Setting node to tapped pods map to %v", summary)
nodeHostToTappedPodsMap = nodeToTappedPodsMap
}

agent/pkg/replay/replay.go (new file, 184 lines)

@@ -0,0 +1,184 @@
package replay
import (
"bytes"
"encoding/json"
"fmt"
"net/http"
"strings"
"sync"
"time"
"github.com/google/uuid"
"github.com/up9inc/mizu/agent/pkg/app"
tapApi "github.com/up9inc/mizu/tap/api"
mizuhttp "github.com/up9inc/mizu/tap/extensions/http"
)
var (
inProcessRequestsLocker = sync.Mutex{}
inProcessRequests = 0
)
const maxParallelAction = 5
type Details struct {
Method string `json:"method"`
Url string `json:"url"`
Body string `json:"body"`
Headers map[string]string `json:"headers"`
}
type Response struct {
Success bool `json:"status"`
Data interface{} `json:"data"`
ErrorMessage string `json:"errorMessage"`
}
func incrementCounter() bool {
result := false
inProcessRequestsLocker.Lock()
if inProcessRequests < maxParallelAction {
inProcessRequests++
result = true
}
inProcessRequestsLocker.Unlock()
return result
}
func decrementCounter() {
inProcessRequestsLocker.Lock()
inProcessRequests--
inProcessRequestsLocker.Unlock()
}
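The mutex-guarded counter caps concurrent replays at maxParallelAction. A buffered channel used as a semaphore expresses the same non-blocking cap; a sketch of that alternative (a design note only, not what the code above does):

var replaySlots = make(chan struct{}, maxParallelAction)

// tryAcquire reports whether a replay slot is free, without blocking.
func tryAcquire() bool {
	select {
	case replaySlots <- struct{}{}:
		return true
	default:
		return false
	}
}

// release frees a slot taken by tryAcquire.
func release() { <-replaySlots }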
func getEntryFromRequestResponse(extension *tapApi.Extension, request *http.Request, response *http.Response) *tapApi.Entry {
captureTime := time.Now()
itemTmp := tapApi.OutputChannelItem{
Protocol: *extension.Protocol,
ConnectionInfo: &tapApi.ConnectionInfo{
ClientIP: "",
ClientPort: "1",
ServerIP: "",
ServerPort: "1",
IsOutgoing: false,
},
Capture: "",
Timestamp: time.Now().UnixMilli(),
Pair: &tapApi.RequestResponsePair{
Request: tapApi.GenericMessage{
IsRequest: true,
CaptureTime: captureTime,
CaptureSize: 0,
Payload: &mizuhttp.HTTPPayload{
Type: mizuhttp.TypeHttpRequest,
Data: request,
},
},
Response: tapApi.GenericMessage{
IsRequest: false,
CaptureTime: captureTime,
CaptureSize: 0,
Payload: &mizuhttp.HTTPPayload{
Type: mizuhttp.TypeHttpResponse,
Data: response,
},
},
},
}
// Analyze is expecting an item that's marshalled and unmarshalled
itemMarshalled, err := json.Marshal(itemTmp)
if err != nil {
return nil
}
var finalItem *tapApi.OutputChannelItem
if err := json.Unmarshal(itemMarshalled, &finalItem); err != nil {
return nil
}
return extension.Dissector.Analyze(finalItem, "", "", "")
}
func ExecuteRequest(replayData *Details, timeout time.Duration) *Response {
if incrementCounter() {
defer decrementCounter()
client := &http.Client{
Timeout: timeout,
}
request, err := http.NewRequest(strings.ToUpper(replayData.Method), replayData.Url, bytes.NewBufferString(replayData.Body))
if err != nil {
return &Response{
Success: false,
Data: nil,
ErrorMessage: err.Error(),
}
}
for headerKey, headerValue := range replayData.Headers {
request.Header.Add(headerKey, headerValue)
}
request.Header.Add("x-mizu", uuid.New().String())
response, requestErr := client.Do(request)
if requestErr != nil {
return &Response{
Success: false,
Data: nil,
ErrorMessage: requestErr.Error(),
}
}
extension := app.ExtensionsMap["http"] // # TODO: maybe pass the extension to the function so it can be tested
entry := getEntryFromRequestResponse(extension, request, response)
base := extension.Dissector.Summarize(entry)
var representation []byte
// Represent is expecting an entry that's marshalled and unmarshalled
entryMarshalled, err := json.Marshal(entry)
if err != nil {
return &Response{
Success: false,
Data: nil,
ErrorMessage: err.Error(),
}
}
var entryUnmarshalled *tapApi.Entry
if err := json.Unmarshal(entryMarshalled, &entryUnmarshalled); err != nil {
return &Response{
Success: false,
Data: nil,
ErrorMessage: err.Error(),
}
}
representation, err = extension.Dissector.Represent(entryUnmarshalled.Request, entryUnmarshalled.Response)
if err != nil {
return &Response{
Success: false,
Data: nil,
ErrorMessage: err.Error(),
}
}
return &Response{
Success: true,
Data: &tapApi.EntryWrapper{
Protocol: *extension.Protocol,
Representation: string(representation),
Data: entryUnmarshalled,
Base: base,
},
ErrorMessage: "",
}
} else {
return &Response{
Success: false,
Data: nil,
ErrorMessage: fmt.Sprintf("reached threshold of %d requests", maxParallelAction),
}
}
}
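A hypothetical caller of ExecuteRequest, as a fragment inside some handler. The URL and header values are made up; the field names match the Details struct above:

resp := ExecuteRequest(&Details{
	Method:  "GET",
	Url:     "http://httpbin.org/status/200",
	Body:    "",
	Headers: map[string]string{"Accept": "application/json"},
}, 10*time.Second)
if !resp.Success {
	fmt.Printf("replay failed: %s\n", resp.ErrorMessage)
}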


@@ -0,0 +1,106 @@
package replay
import (
"bytes"
"fmt"
"net/http"
"strings"
"testing"
"time"
"encoding/json"
"github.com/google/uuid"
tapApi "github.com/up9inc/mizu/tap/api"
mizuhttp "github.com/up9inc/mizu/tap/extensions/http"
)
func TestValid(t *testing.T) {
client := &http.Client{
Timeout: 10 * time.Second,
}
tests := map[string]*Details{
"40x": {
Method: "GET",
Url: "http://httpbin.org/status/404",
Body: "",
Headers: map[string]string{},
},
"20x": {
Method: "GET",
Url: "http://httpbin.org/status/200",
Body: "",
Headers: map[string]string{},
},
"50x": {
Method: "GET",
Url: "http://httpbin.org/status/500",
Body: "",
Headers: map[string]string{},
},
// TODO: this should be fixed; currently not working because the header name contains ":"
//":path-header": {
// Method: "GET",
// Url: "http://httpbin.org/get",
// Body: "",
// Headers: map[string]string{
// ":path": "/get",
// },
// },
}
for testCaseName, replayData := range tests {
t.Run(fmt.Sprintf("%+v", testCaseName), func(t *testing.T) {
request, err := http.NewRequest(strings.ToUpper(replayData.Method), replayData.Url, bytes.NewBufferString(replayData.Body))
if err != nil {
t.Errorf("Error executing request")
}
for headerKey, headerValue := range replayData.Headers {
request.Header.Add(headerKey, headerValue)
}
request.Header.Add("x-mizu", uuid.New().String())
response, requestErr := client.Do(request)
if requestErr != nil {
t.Errorf("failed: %v, ", requestErr)
}
extensionHttp := &tapApi.Extension{}
dissectorHttp := mizuhttp.NewDissector()
dissectorHttp.Register(extensionHttp)
extensionHttp.Dissector = dissectorHttp
extension := extensionHttp
entry := getEntryFromRequestResponse(extension, request, response)
base := extension.Dissector.Summarize(entry)
// Represent is expecting an entry that's marshalled and unmarshalled
entryMarshalled, err := json.Marshal(entry)
if err != nil {
t.Errorf("failed marshaling entry: %v, ", err)
}
var entryUnmarshalled *tapApi.Entry
if err := json.Unmarshal(entryMarshalled, &entryUnmarshalled); err != nil {
t.Errorf("failed unmarshaling entry: %v, ", err)
}
var representation []byte
representation, err = extension.Dissector.Represent(entryUnmarshalled.Request, entryUnmarshalled.Response)
if err != nil {
t.Errorf("failed: %v, ", err)
}
result := &tapApi.EntryWrapper{
Protocol: *extension.Protocol,
Representation: string(representation),
Data: entry,
Base: base,
}
t.Logf("%+v", result)
//data, _ := json.MarshalIndent(result, "", " ")
//t.Logf("%+v", string(data))
})
}
}


@@ -0,0 +1,13 @@
package routes
import (
"github.com/gin-gonic/gin"
"github.com/up9inc/mizu/agent/pkg/controllers"
)
// ReplayRoutes defines the group of replay routes.
func ReplayRoutes(app *gin.Engine) {
routeGroup := app.Group("/replay")
routeGroup.POST("/", controllers.ReplayRequest)
}
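For reference, a hedged sketch of calling this route from a client. The JSON field names come from the replay Details struct shown earlier; the host, port and payload values are assumptions (8899 is the default GUI port, and the request is assumed to reach the agent through the proxy):

package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	payload := []byte(`{"method":"GET","url":"http://httpbin.org/status/200","body":"","headers":{"Accept":"application/json"}}`)

	resp, err := http.Post("http://localhost:8899/replay/", "application/json", bytes.NewBuffer(payload))
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}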


@@ -15,9 +15,8 @@ func StatusRoutes(ginApp *gin.Engine) {
routeGroup.GET("/connectedTappersCount", controllers.GetConnectedTappersCount)
routeGroup.GET("/tap", controllers.GetTappingStatus)
routeGroup.GET("/general", controllers.GetGeneralStats) // get general stats about entries in DB
routeGroup.GET("/accumulative", controllers.GetAccumulativeStats)
routeGroup.GET("/accumulativeTiming", controllers.GetAccumulativeStatsTiming)
routeGroup.GET("/general", controllers.GetGeneralStats)
routeGroup.GET("/trafficStats", controllers.GetTrafficStats)
routeGroup.GET("/resolving", controllers.GetCurrentResolvingInformation)
}


@@ -1,124 +0,0 @@
package rules
import (
"encoding/base64"
"encoding/json"
"fmt"
"reflect"
"regexp"
"strings"
"github.com/up9inc/mizu/agent/pkg/har"
"github.com/up9inc/mizu/logger"
"github.com/up9inc/mizu/shared"
"github.com/yalp/jsonpath"
)
type RulesMatched struct {
Matched bool `json:"matched"`
Rule shared.RulePolicy `json:"rule"`
}
func appendRulesMatched(rulesMatched []RulesMatched, matched bool, rule shared.RulePolicy) []RulesMatched {
return append(rulesMatched, RulesMatched{Matched: matched, Rule: rule})
}
func ValidatePath(URLFromRule string, URL string) bool {
if URLFromRule != "" {
matchPath, err := regexp.MatchString(URLFromRule, URL)
if err != nil || !matchPath {
return false
}
}
return true
}
func ValidateService(serviceFromRule string, service string) bool {
if serviceFromRule != "" {
matchService, err := regexp.MatchString(serviceFromRule, service)
if err != nil || !matchService {
return false
}
}
return true
}
func MatchRequestPolicy(harEntry har.Entry, service string) (resultPolicyToSend []RulesMatched, isEnabled bool) {
enforcePolicy, err := shared.DecodeEnforcePolicy(fmt.Sprintf("%s%s", shared.ConfigDirPath, shared.ValidationRulesFileName))
if err == nil && len(enforcePolicy.Rules) > 0 {
isEnabled = true
}
for _, rule := range enforcePolicy.Rules {
if !ValidatePath(rule.Path, harEntry.Request.URL) || !ValidateService(rule.Service, service) {
continue
}
if rule.Type == "json" {
var bodyJsonMap interface{}
contentTextDecoded, _ := base64.StdEncoding.DecodeString(harEntry.Response.Content.Text)
if err := json.Unmarshal(contentTextDecoded, &bodyJsonMap); err != nil {
continue
}
out, err := jsonpath.Read(bodyJsonMap, rule.Key)
if err != nil || out == nil {
continue
}
var matchValue bool
if reflect.TypeOf(out).Kind() == reflect.String {
matchValue, err = regexp.MatchString(rule.Value, out.(string))
if err != nil {
continue
}
logger.Log.Info(matchValue, rule.Value)
} else {
val := fmt.Sprint(out)
matchValue, err = regexp.MatchString(rule.Value, val)
if err != nil {
continue
}
}
resultPolicyToSend = appendRulesMatched(resultPolicyToSend, matchValue, rule)
} else if rule.Type == "header" {
for j := range harEntry.Response.Headers {
matchKey, err := regexp.MatchString(rule.Key, harEntry.Response.Headers[j].Name)
if err != nil {
continue
}
if matchKey {
matchValue, err := regexp.MatchString(rule.Value, harEntry.Response.Headers[j].Value)
if err != nil {
continue
}
resultPolicyToSend = appendRulesMatched(resultPolicyToSend, matchValue, rule)
}
}
} else {
resultPolicyToSend = appendRulesMatched(resultPolicyToSend, true, rule)
}
}
return
}
func PassedValidationRules(rulesMatched []RulesMatched) (bool, int64, int) {
var numberOfRulesMatched = len(rulesMatched)
var responseTime int64 = -1
if numberOfRulesMatched == 0 {
return false, 0, numberOfRulesMatched
}
for _, rule := range rulesMatched {
if !rule.Matched {
return false, responseTime, numberOfRulesMatched
} else {
if strings.ToLower(rule.Rule.Type) == "slo" {
if rule.Rule.ResponseTime < responseTime || responseTime == -1 {
responseTime = rule.Rule.ResponseTime
}
}
}
}
return true, responseTime, numberOfRulesMatched
}


@@ -50,11 +50,13 @@ var (
IP: fmt.Sprintf("%s.%s", Ip, UnresolvedNodeName),
}
ProtocolHttp = &tapApi.Protocol{
Name: "http",
ProtocolSummary: tapApi.ProtocolSummary{
Name: "http",
Version: "1.1",
Abbreviation: "HTTP",
},
LongName: "Hypertext Transfer Protocol -- HTTP/1.1",
Abbreviation: "HTTP",
Macro: "http",
Version: "1.1",
BackgroundColor: "#205cf5",
ForegroundColor: "#ffffff",
FontSize: 12,
@@ -63,11 +65,13 @@ var (
Priority: 0,
}
ProtocolRedis = &tapApi.Protocol{
Name: "redis",
ProtocolSummary: tapApi.ProtocolSummary{
Name: "redis",
Version: "3.x",
Abbreviation: "REDIS",
},
LongName: "Redis Serialization Protocol",
Abbreviation: "REDIS",
Macro: "redis",
Version: "3.x",
BackgroundColor: "#a41e11",
ForegroundColor: "#ffffff",
FontSize: 11,


@@ -4,9 +4,7 @@ import (
"bytes"
"encoding/json"
"fmt"
"io/ioutil"
"net/http"
"net/url"
"time"
"github.com/up9inc/mizu/cli/utils"
@@ -93,45 +91,3 @@ func (provider *Provider) ReportTappedPods(pods []core.Pod) error {
}
}
}
func (provider *Provider) GetGeneralStats() (map[string]interface{}, error) {
generalStatsUrl := fmt.Sprintf("%s/status/general", provider.url)
response, requestErr := utils.Get(generalStatsUrl, provider.client)
if requestErr != nil {
return nil, fmt.Errorf("failed to get general stats for telemetry, err: %w", requestErr)
}
defer response.Body.Close()
data, readErr := ioutil.ReadAll(response.Body)
if readErr != nil {
return nil, fmt.Errorf("failed to read general stats for telemetry, err: %v", readErr)
}
var generalStats map[string]interface{}
if parseErr := json.Unmarshal(data, &generalStats); parseErr != nil {
return nil, fmt.Errorf("failed to parse general stats for telemetry, err: %v", parseErr)
}
return generalStats, nil
}
func (provider *Provider) GetVersion() (string, error) {
versionUrl, _ := url.Parse(fmt.Sprintf("%s/metadata/version", provider.url))
req := &http.Request{
Method: http.MethodGet,
URL: versionUrl,
}
statusResp, err := utils.Do(req, provider.client)
if err != nil {
return "", err
}
defer statusResp.Body.Close()
versionResponse := &shared.VersionResponse{}
if err := json.NewDecoder(statusResp.Body).Decode(&versionResponse); err != nil {
return "", err
}
return versionResponse.Ver, nil
}


@@ -4,7 +4,6 @@ import (
"github.com/creasty/defaults"
"github.com/spf13/cobra"
"github.com/up9inc/mizu/cli/config/configStructs"
"github.com/up9inc/mizu/cli/telemetry"
"github.com/up9inc/mizu/logger"
)
@@ -12,7 +11,6 @@ var checkCmd = &cobra.Command{
Use: "check",
Short: "Check the Mizu installation for potential problems",
RunE: func(cmd *cobra.Command, args []string) error {
go telemetry.ReportRun("check", nil)
runMizuCheck()
return nil
},


@@ -2,14 +2,12 @@ package cmd
import (
"github.com/spf13/cobra"
"github.com/up9inc/mizu/cli/telemetry"
)
var cleanCmd = &cobra.Command{
Use: "clean",
Short: "Removes all mizu resources",
RunE: func(cmd *cobra.Command, args []string) error {
go telemetry.ReportRun("clean", nil)
performCleanCommand()
return nil
},


@@ -7,7 +7,6 @@ import (
"github.com/spf13/cobra"
"github.com/up9inc/mizu/cli/config"
"github.com/up9inc/mizu/cli/config/configStructs"
"github.com/up9inc/mizu/cli/telemetry"
"github.com/up9inc/mizu/cli/uiUtils"
"github.com/up9inc/mizu/logger"
)
@@ -16,8 +15,6 @@ var configCmd = &cobra.Command{
Use: "config",
Short: "Generate config with default values",
RunE: func(cmd *cobra.Command, args []string) error {
go telemetry.ReportRun("config", config.Config.Config)
configWithDefaults, err := config.GetConfigWithDefaults()
if err != nil {
logger.Log.Errorf("Failed generating config with defaults, err: %v", err)


@@ -4,7 +4,6 @@ import (
"github.com/creasty/defaults"
"github.com/spf13/cobra"
"github.com/up9inc/mizu/cli/config/configStructs"
"github.com/up9inc/mizu/cli/telemetry"
"github.com/up9inc/mizu/logger"
)
@@ -12,7 +11,6 @@ var installCmd = &cobra.Command{
Use: "install",
Short: "Installs mizu components",
RunE: func(cmd *cobra.Command, args []string) error {
go telemetry.ReportRun("install", nil)
runMizuInstall()
return nil
},


@@ -9,7 +9,6 @@ import (
"github.com/up9inc/mizu/cli/config/configStructs"
"github.com/up9inc/mizu/cli/errormessage"
"github.com/up9inc/mizu/cli/mizu/fsUtils"
"github.com/up9inc/mizu/cli/telemetry"
"github.com/up9inc/mizu/logger"
)
@@ -17,8 +16,6 @@ var logsCmd = &cobra.Command{
Use: "logs",
Short: "Create a zip file with logs for Github issue or troubleshoot",
RunE: func(cmd *cobra.Command, args []string) error {
go telemetry.ReportRun("logs", config.Config.Logs)
kubernetesProvider, err := getKubernetesProviderForCli()
if err != nil {
return nil


@@ -48,13 +48,12 @@ func init() {
tapCmd.Flags().Uint16P(configStructs.GuiPortTapName, "p", defaultTapConfig.GuiPort, "Provide a custom port for the web interface webserver")
tapCmd.Flags().StringSliceP(configStructs.NamespacesTapName, "n", defaultTapConfig.Namespaces, "Namespaces selector")
tapCmd.Flags().BoolP(configStructs.AllNamespacesTapName, "A", defaultTapConfig.AllNamespaces, "Tap all namespaces")
tapCmd.Flags().StringSliceP(configStructs.PlainTextFilterRegexesTapName, "r", defaultTapConfig.PlainTextFilterRegexes, "List of regex expressions that are used to filter matching values from text/plain http bodies")
tapCmd.Flags().Bool(configStructs.EnableRedactionTapName, defaultTapConfig.EnableRedaction, "Enables redaction of potentially sensitive request/response headers and body values")
tapCmd.Flags().String(configStructs.HumanMaxEntriesDBSizeTapName, defaultTapConfig.HumanMaxEntriesDBSize, "Override the default max entries db size")
tapCmd.Flags().String(configStructs.InsertionFilterName, defaultTapConfig.InsertionFilter, "Set the insertion filter. Accepts string or a file path.")
tapCmd.Flags().Bool(configStructs.DryRunTapName, defaultTapConfig.DryRun, "Preview of all pods matching the regex, without tapping them")
tapCmd.Flags().String(configStructs.EnforcePolicyFile, defaultTapConfig.EnforcePolicyFile, "Yaml file path with policy rules")
tapCmd.Flags().Bool(configStructs.ServiceMeshName, defaultTapConfig.ServiceMesh, "Record decrypted traffic if the cluster is configured with a service mesh and with mtls")
tapCmd.Flags().Bool(configStructs.TlsName, defaultTapConfig.Tls, "Record tls traffic")
tapCmd.Flags().Bool(configStructs.ProfilerName, defaultTapConfig.Profiler, "Run pprof server")
tapCmd.Flags().Int(configStructs.MaxLiveStreamsName, defaultTapConfig.MaxLiveStreams, "Maximum live tcp streams to handle concurrently")
}


@@ -9,10 +9,8 @@ import (
"time"
"github.com/up9inc/mizu/cli/resources"
"github.com/up9inc/mizu/cli/telemetry"
"github.com/up9inc/mizu/cli/utils"
"gopkg.in/yaml.v3"
core "k8s.io/api/core/v1"
k8serrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -45,16 +43,6 @@ func RunMizuTap() {
apiProvider = apiserver.NewProvider(GetApiServerUrl(config.Config.Tap.GuiPort), apiserver.DefaultRetries, apiserver.DefaultTimeout)
var err error
var serializedValidationRules string
if config.Config.Tap.EnforcePolicyFile != "" {
serializedValidationRules, err = readValidationRules(config.Config.Tap.EnforcePolicyFile)
if err != nil {
logger.Log.Errorf(uiUtils.Error, fmt.Sprintf("Error reading policy file: %v", errormessage.FormatError(err)))
return
}
}
kubernetesProvider, err := getKubernetesProviderForCli()
if err != nil {
return
@@ -98,7 +86,7 @@ func RunMizuTap() {
}
logger.Log.Infof("Waiting for Mizu Agent to start...")
if state.mizuServiceAccountExists, err = resources.CreateTapMizuResources(ctx, kubernetesProvider, serializedValidationRules, serializedMizuConfig, config.Config.IsNsRestrictedMode(), config.Config.MizuResourcesNamespace, config.Config.AgentImage, config.Config.Tap.MaxEntriesDBSizeBytes(), config.Config.Tap.ApiServerResources, config.Config.ImagePullPolicy(), config.Config.LogLevel(), config.Config.Tap.Profiler); err != nil {
if state.mizuServiceAccountExists, err = resources.CreateTapMizuResources(ctx, kubernetesProvider, serializedMizuConfig, config.Config.IsNsRestrictedMode(), config.Config.MizuResourcesNamespace, config.Config.AgentImage, config.Config.Tap.MaxEntriesDBSizeBytes(), config.Config.Tap.ApiServerResources, config.Config.ImagePullPolicy(), config.Config.LogLevel(), config.Config.Tap.Profiler); err != nil {
var statusError *k8serrors.StatusError
if errors.As(err, &statusError) && (statusError.ErrStatus.Reason == metav1.StatusReasonAlreadyExists) {
logger.Log.Info("Mizu is already running in this namespace, change the `mizu-resources-namespace` configuration or run `mizu clean` to remove the currently running Mizu instance")
@@ -120,8 +108,6 @@ func RunMizuTap() {
}
func finishTapExecution(kubernetesProvider *kubernetes.Provider) {
telemetry.ReportTapTelemetry(apiProvider, config.Config.Tap, state.startTime)
finishMizuExecution(kubernetesProvider, config.Config.IsNsRestrictedMode(), config.Config.MizuResourcesNamespace)
}
@@ -137,7 +123,6 @@ func getTapMizuAgentConfig() *shared.MizuAgentConfig {
AgentDatabasePath: shared.DataDirPath,
ServiceMap: config.Config.ServiceMap,
OAS: config.Config.OAS,
Telemetry: config.Config.Telemetry,
}
return &mizuAgentConfig
@@ -162,20 +147,22 @@ func printTappedPodsPreview(ctx context.Context, kubernetesProvider *kubernetes.
}
}
func startTapperSyncer(ctx context.Context, cancel context.CancelFunc, provider *kubernetes.Provider, targetNamespaces []string, mizuApiFilteringOptions api.TrafficFilteringOptions, startTime time.Time) error {
func startTapperSyncer(ctx context.Context, cancel context.CancelFunc, provider *kubernetes.Provider, targetNamespaces []string, startTime time.Time) error {
tapperSyncer, err := kubernetes.CreateAndStartMizuTapperSyncer(ctx, provider, kubernetes.TapperSyncerConfig{
TargetNamespaces: targetNamespaces,
PodFilterRegex: *config.Config.Tap.PodRegex(),
MizuResourcesNamespace: config.Config.MizuResourcesNamespace,
AgentImage: config.Config.AgentImage,
TapperResources: config.Config.Tap.TapperResources,
ImagePullPolicy: config.Config.ImagePullPolicy(),
LogLevel: config.Config.LogLevel(),
IgnoredUserAgents: config.Config.Tap.IgnoredUserAgents,
MizuApiFilteringOptions: mizuApiFilteringOptions,
TargetNamespaces: targetNamespaces,
PodFilterRegex: *config.Config.Tap.PodRegex(),
MizuResourcesNamespace: config.Config.MizuResourcesNamespace,
AgentImage: config.Config.AgentImage,
TapperResources: config.Config.Tap.TapperResources,
ImagePullPolicy: config.Config.ImagePullPolicy(),
LogLevel: config.Config.LogLevel(),
MizuApiFilteringOptions: api.TrafficFilteringOptions{
IgnoredUserAgents: config.Config.Tap.IgnoredUserAgents,
},
MizuServiceAccountExists: state.mizuServiceAccountExists,
ServiceMesh: config.Config.Tap.ServiceMesh,
Tls: config.Config.Tap.Tls,
MaxLiveStreams: config.Config.Tap.MaxLiveStreams,
}, startTime)
if err != nil {
@@ -239,36 +226,6 @@ func getErrorDisplayTextForK8sTapManagerError(err kubernetes.K8sTapManagerError)
}
}
func readValidationRules(file string) (string, error) {
rules, err := shared.DecodeEnforcePolicy(file)
if err != nil {
return "", err
}
newContent, _ := yaml.Marshal(&rules)
return string(newContent), nil
}
func getMizuApiFilteringOptions() (*api.TrafficFilteringOptions, error) {
var compiledRegexSlice []*api.SerializableRegexp
if config.Config.Tap.PlainTextFilterRegexes != nil && len(config.Config.Tap.PlainTextFilterRegexes) > 0 {
compiledRegexSlice = make([]*api.SerializableRegexp, 0)
for _, regexStr := range config.Config.Tap.PlainTextFilterRegexes {
compiledRegex, err := api.CompileRegexToSerializableRegexp(regexStr)
if err != nil {
return nil, err
}
compiledRegexSlice = append(compiledRegexSlice, compiledRegex)
}
}
return &api.TrafficFilteringOptions{
PlainTextMaskingRegexes: compiledRegexSlice,
IgnoredUserAgents: config.Config.Tap.IgnoredUserAgents,
EnableRedaction: config.Config.Tap.EnableRedaction,
}, nil
}
func watchApiServerPod(ctx context.Context, kubernetesProvider *kubernetes.Provider, cancel context.CancelFunc) {
podExactRegex := regexp.MustCompile(fmt.Sprintf("^%s$", kubernetes.ApiServerPodName))
podWatchHelper := kubernetes.NewPodWatchHelper(kubernetesProvider, podExactRegex)
@@ -386,8 +343,7 @@ func watchApiServerEvents(ctx context.Context, kubernetesProvider *kubernetes.Pr
func postApiServerStarted(ctx context.Context, kubernetesProvider *kubernetes.Provider, cancel context.CancelFunc) {
startProxyReportErrorIfAny(kubernetesProvider, ctx, cancel, config.Config.Tap.GuiPort)
options, _ := getMizuApiFilteringOptions()
if err := startTapperSyncer(ctx, cancel, kubernetesProvider, state.targetNamespaces, *options, state.startTime); err != nil {
if err := startTapperSyncer(ctx, cancel, kubernetesProvider, state.targetNamespaces, state.startTime); err != nil {
logger.Log.Errorf(uiUtils.Error, fmt.Sprintf("Error starting mizu tapper syncer: %v", errormessage.FormatError(err)))
cancel()
}


@@ -6,7 +6,6 @@ import (
"github.com/up9inc/mizu/cli/config"
"github.com/up9inc/mizu/cli/config/configStructs"
"github.com/up9inc/mizu/cli/telemetry"
"github.com/up9inc/mizu/logger"
"github.com/creasty/defaults"
@@ -18,8 +17,6 @@ var versionCmd = &cobra.Command{
Use: "version",
Short: "Print version info",
RunE: func(cmd *cobra.Command, args []string) error {
go telemetry.ReportRun("version", config.Config.Version)
if config.Config.Version.DebugInfo {
timeStampInt, _ := strconv.ParseInt(mizu.BuildTimestamp, 10, 0)
logger.Log.Infof("Version: %s \nBranch: %s (%s)", mizu.Ver, mizu.Branch, mizu.GitCommitHash)


@@ -3,9 +3,7 @@ package cmd
import (
"github.com/creasty/defaults"
"github.com/spf13/cobra"
"github.com/up9inc/mizu/cli/config"
"github.com/up9inc/mizu/cli/config/configStructs"
"github.com/up9inc/mizu/cli/telemetry"
"github.com/up9inc/mizu/logger"
)
@@ -13,7 +11,6 @@ var viewCmd = &cobra.Command{
Use: "view",
Short: "Open GUI in browser",
RunE: func(cmd *cobra.Command, args []string) error {
go telemetry.ReportRun("view", config.Config.View)
runMizuView()
return nil
},


@@ -31,7 +31,6 @@ type ConfigStruct struct {
AgentImage string `yaml:"agent-image,omitempty" readonly:""`
ImagePullPolicyStr string `yaml:"image-pull-policy" default:"Always"`
MizuResourcesNamespace string `yaml:"mizu-resources-namespace" default:"mizu"`
Telemetry bool `yaml:"telemetry" default:"true"`
DumpLogs bool `yaml:"dump-logs" default:"false"`
KubeConfigPathStr string `yaml:"kube-config-path"`
KubeContext string `yaml:"kube-context"`


@@ -6,6 +6,7 @@ import (
"io/ioutil"
"os"
"regexp"
"strings"
"github.com/up9inc/mizu/cli/uiUtils"
"github.com/up9inc/mizu/shared"
@@ -15,38 +16,43 @@ import (
)
const (
GuiPortTapName = "gui-port"
NamespacesTapName = "namespaces"
AllNamespacesTapName = "all-namespaces"
PlainTextFilterRegexesTapName = "regex-masking"
EnableRedactionTapName = "redact"
HumanMaxEntriesDBSizeTapName = "max-entries-db-size"
InsertionFilterName = "insertion-filter"
DryRunTapName = "dry-run"
EnforcePolicyFile = "traffic-validation-file"
ServiceMeshName = "service-mesh"
TlsName = "tls"
ProfilerName = "profiler"
GuiPortTapName = "gui-port"
NamespacesTapName = "namespaces"
AllNamespacesTapName = "all-namespaces"
EnableRedactionTapName = "redact"
HumanMaxEntriesDBSizeTapName = "max-entries-db-size"
InsertionFilterName = "insertion-filter"
DryRunTapName = "dry-run"
ServiceMeshName = "service-mesh"
TlsName = "tls"
ProfilerName = "profiler"
MaxLiveStreamsName = "max-live-streams"
)
type TapConfig struct {
PodRegexStr string `yaml:"regex" default:".*"`
GuiPort uint16 `yaml:"gui-port" default:"8899"`
ProxyHost string `yaml:"proxy-host" default:"127.0.0.1"`
Namespaces []string `yaml:"namespaces"`
AllNamespaces bool `yaml:"all-namespaces" default:"false"`
PlainTextFilterRegexes []string `yaml:"regex-masking"`
IgnoredUserAgents []string `yaml:"ignored-user-agents"`
EnableRedaction bool `yaml:"redact" default:"false"`
HumanMaxEntriesDBSize string `yaml:"max-entries-db-size" default:"200MB"`
InsertionFilter string `yaml:"insertion-filter" default:""`
DryRun bool `yaml:"dry-run" default:"false"`
EnforcePolicyFile string `yaml:"traffic-validation-file"`
ApiServerResources shared.Resources `yaml:"api-server-resources"`
TapperResources shared.Resources `yaml:"tapper-resources"`
ServiceMesh bool `yaml:"service-mesh" default:"false"`
Tls bool `yaml:"tls" default:"false"`
Profiler bool `yaml:"profiler" default:"false"`
PodRegexStr string `yaml:"regex" default:".*"`
GuiPort uint16 `yaml:"gui-port" default:"8899"`
ProxyHost string `yaml:"proxy-host" default:"127.0.0.1"`
Namespaces []string `yaml:"namespaces"`
AllNamespaces bool `yaml:"all-namespaces" default:"false"`
IgnoredUserAgents []string `yaml:"ignored-user-agents"`
EnableRedaction bool `yaml:"redact" default:"false"`
RedactPatterns struct {
RequestHeaders []string `yaml:"request-headers"`
ResponseHeaders []string `yaml:"response-headers"`
RequestBody []string `yaml:"request-body"`
ResponseBody []string `yaml:"response-body"`
RequestQueryParams []string `yaml:"request-query-params"`
} `yaml:"redact-patterns"`
HumanMaxEntriesDBSize string `yaml:"max-entries-db-size" default:"200MB"`
InsertionFilter string `yaml:"insertion-filter" default:""`
DryRun bool `yaml:"dry-run" default:"false"`
ApiServerResources shared.Resources `yaml:"api-server-resources"`
TapperResources shared.Resources `yaml:"tapper-resources"`
ServiceMesh bool `yaml:"service-mesh" default:"false"`
Tls bool `yaml:"tls" default:"false"`
Profiler bool `yaml:"profiler" default:"false"`
MaxLiveStreams int `yaml:"max-live-streams" default:"500"`
}
func (config *TapConfig) PodRegex() *regexp.Regexp {
@@ -71,9 +77,48 @@ func (config *TapConfig) GetInsertionFilter() string {
}
}
}
redactFilter := getRedactFilter(config)
if insertionFilter != "" && redactFilter != "" {
return fmt.Sprintf("(%s) and (%s)", insertionFilter, redactFilter)
} else if insertionFilter == "" && redactFilter != "" {
return redactFilter
}
return insertionFilter
}
func getRedactFilter(config *TapConfig) string {
if !config.EnableRedaction {
return ""
}
var redactValues []string
for _, requestHeader := range config.RedactPatterns.RequestHeaders {
redactValues = append(redactValues, fmt.Sprintf("request.headers['%s']", requestHeader))
}
for _, responseHeader := range config.RedactPatterns.ResponseHeaders {
redactValues = append(redactValues, fmt.Sprintf("response.headers['%s']", responseHeader))
}
for _, requestBody := range config.RedactPatterns.RequestBody {
redactValues = append(redactValues, fmt.Sprintf("request.postData.text.json()...%s", requestBody))
}
for _, responseBody := range config.RedactPatterns.ResponseBody {
redactValues = append(redactValues, fmt.Sprintf("response.content.text.json()...%s", responseBody))
}
for _, requestQueryParams := range config.RedactPatterns.RequestQueryParams {
redactValues = append(redactValues, fmt.Sprintf("request.queryString['%s']", requestQueryParams))
}
if len(redactValues) == 0 {
return ""
}
return fmt.Sprintf("redact(\"%s\")", strings.Join(redactValues, "\",\""))
}
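For illustration, this is the filter string produced for a single redacted request header, combined with a user-supplied insertion filter the way GetInsertionFilter above does (all input values here are made up):

package main

import (
	"fmt"
	"strings"
)

func main() {
	redactValues := []string{"request.headers['Authorization']"}
	insertionFilter := "http and response.status == 500"

	redactFilter := fmt.Sprintf("redact(\"%s\")", strings.Join(redactValues, "\",\""))
	combined := fmt.Sprintf("(%s) and (%s)", insertionFilter, redactFilter)

	fmt.Println(combined)
	// (http and response.status == 500) and (redact("request.headers['Authorization']"))
}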
func (config *TapConfig) Validate() error {
_, compileErr := regexp.Compile(config.PodRegexStr)
if compileErr != nil {


@@ -4,7 +4,6 @@ go 1.17
require (
github.com/creasty/defaults v1.5.2
github.com/denisbrodbeck/machineid v1.0.1
github.com/google/go-github/v37 v37.0.0
github.com/op/go-logging v0.0.0-20160315200505-970db520ece7
github.com/spf13/cobra v1.3.0


@@ -145,8 +145,6 @@ github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSs
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/daviddengcn/go-colortext v0.0.0-20160507010035-511bcaf42ccd/go.mod h1:dv4zxwHi5C/8AeI+4gX4dCWOIvNi7I6JCSX0HvlKPgE=
github.com/denisbrodbeck/machineid v1.0.1 h1:geKr9qtkB876mXguW2X6TU4ZynleN6ezuMSRhl4D7AQ=
github.com/denisbrodbeck/machineid v1.0.1/go.mod h1:dJUwb7PTidGDeYyUBmXZ2GphQBbjJCrnectwCyxcUSI=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
github.com/docker/distribution v2.7.1+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=


@@ -14,8 +14,6 @@ var (
Platform = ""
)
const DEVENVVAR = "MIZU_DISABLE_TELEMTRY"
func GetMizuFolderPath() string {
home, homeDirErr := os.UserHomeDir()
if homeDirErr != nil {


@@ -5,7 +5,6 @@ import (
"fmt"
"io/ioutil"
"net/http"
"os"
"runtime"
"strings"
"time"
@@ -18,10 +17,6 @@ import (
)
func CheckNewerVersion(versionChan chan string) {
if _, present := os.LookupEnv(mizu.DEVENVVAR); present {
versionChan <- ""
return
}
logger.Log.Debugf("Checking for newer version...")
start := time.Now()
client := github.NewClient(nil)


@@ -14,14 +14,14 @@ import (
core "k8s.io/api/core/v1"
)
func CreateTapMizuResources(ctx context.Context, kubernetesProvider *kubernetes.Provider, serializedValidationRules string, serializedMizuConfig string, isNsRestrictedMode bool, mizuResourcesNamespace string, agentImage string, maxEntriesDBSizeBytes int64, apiServerResources shared.Resources, imagePullPolicy core.PullPolicy, logLevel logging.Level, profiler bool) (bool, error) {
func CreateTapMizuResources(ctx context.Context, kubernetesProvider *kubernetes.Provider, serializedMizuConfig string, isNsRestrictedMode bool, mizuResourcesNamespace string, agentImage string, maxEntriesDBSizeBytes int64, apiServerResources shared.Resources, imagePullPolicy core.PullPolicy, logLevel logging.Level, profiler bool) (bool, error) {
if !isNsRestrictedMode {
if err := createMizuNamespace(ctx, kubernetesProvider, mizuResourcesNamespace); err != nil {
return false, err
}
}
if err := createMizuConfigmap(ctx, kubernetesProvider, serializedValidationRules, serializedMizuConfig, mizuResourcesNamespace); err != nil {
if err := createMizuConfigmap(ctx, kubernetesProvider, serializedMizuConfig, mizuResourcesNamespace); err != nil {
return false, err
}
@@ -71,8 +71,8 @@ func createMizuNamespace(ctx context.Context, kubernetesProvider *kubernetes.Pro
return err
}
func createMizuConfigmap(ctx context.Context, kubernetesProvider *kubernetes.Provider, serializedValidationRules string, serializedMizuConfig string, mizuResourcesNamespace string) error {
err := kubernetesProvider.CreateConfigMap(ctx, mizuResourcesNamespace, kubernetes.ConfigMapName, serializedValidationRules, serializedMizuConfig)
func createMizuConfigmap(ctx context.Context, kubernetesProvider *kubernetes.Provider, serializedMizuConfig string, mizuResourcesNamespace string) error {
err := kubernetesProvider.CreateConfigMap(ctx, mizuResourcesNamespace, kubernetes.ConfigMapName, serializedMizuConfig)
return err
}


@@ -1,97 +0,0 @@
package telemetry
import (
"bytes"
"encoding/json"
"fmt"
"net/http"
"os"
"time"
"github.com/denisbrodbeck/machineid"
"github.com/up9inc/mizu/cli/apiserver"
"github.com/up9inc/mizu/cli/config"
"github.com/up9inc/mizu/cli/mizu"
"github.com/up9inc/mizu/logger"
)
const telemetryUrl = "https://us-east4-up9-prod.cloudfunctions.net/mizu-telemetry"
func ReportRun(cmd string, args interface{}) {
if !shouldRunTelemetry() {
logger.Log.Debug("not reporting telemetry")
return
}
argsBytes, _ := json.Marshal(args)
argsMap := map[string]interface{}{
"cmd": cmd,
"args": string(argsBytes),
}
if err := sendTelemetry(argsMap); err != nil {
logger.Log.Debug(err)
return
}
logger.Log.Debugf("successfully reported telemetry for cmd %v", cmd)
}
func ReportTapTelemetry(apiProvider *apiserver.Provider, args interface{}, startTime time.Time) {
if !shouldRunTelemetry() {
logger.Log.Debug("not reporting telemetry")
return
}
generalStats, err := apiProvider.GetGeneralStats()
if err != nil {
logger.Log.Debugf("[ERROR] failed to get general stats from api server %v", err)
return
}
argsBytes, _ := json.Marshal(args)
argsMap := map[string]interface{}{
"cmd": "tap",
"args": string(argsBytes),
"executionTimeInSeconds": int(time.Since(startTime).Seconds()),
"apiCallsCount": generalStats["EntriesCount"],
"trafficVolumeInGB": generalStats["EntriesVolumeInGB"],
}
if err := sendTelemetry(argsMap); err != nil {
logger.Log.Debug(err)
return
}
logger.Log.Debug("successfully reported telemetry of tap command")
}
func shouldRunTelemetry() bool {
if _, present := os.LookupEnv(mizu.DEVENVVAR); present {
return false
}
if !config.Config.Telemetry {
return false
}
return mizu.Branch == "main" || mizu.Branch == "develop"
}
func sendTelemetry(argsMap map[string]interface{}) error {
argsMap["component"] = "mizu_cli"
argsMap["buildTimestamp"] = mizu.BuildTimestamp
argsMap["branch"] = mizu.Branch
argsMap["version"] = mizu.Ver
argsMap["platform"] = mizu.Platform
if machineId, err := machineid.ProtectedID("mizu"); err == nil {
argsMap["machineId"] = machineId
}
jsonValue, _ := json.Marshal(argsMap)
if resp, err := http.Post(telemetryUrl, "application/json", bytes.NewBuffer(jsonValue)); err != nil {
return fmt.Errorf("ERROR: failed sending telemetry, err: %v, response %v", err, resp)
}
return nil
}


@@ -57,7 +57,7 @@ log "Writing output to $MIZU_BENCHMARK_OUTPUT_DIR"
cd $MIZU_HOME || exit 1
export HOST_MODE=0
export SENSITIVE_DATA_FILTERING_OPTIONS='{"EnableRedaction": false}'
export SENSITIVE_DATA_FILTERING_OPTIONS='{}'
export MIZU_DEBUG_DISABLE_PCAP=false
export MIZU_DEBUG_DISABLE_TCP_REASSEMBLY=false
export MIZU_DEBUG_DISABLE_TCP_STREAM=false


@@ -6,7 +6,6 @@ const (
NodeNameEnvVar = "NODE_NAME"
ConfigDirPath = "/app/config/"
DataDirPath = "/app/data/"
ValidationRulesFileName = "validation-rules.yaml"
ConfigFileName = "mizu-config.json"
DefaultApiServerPort = 8899
LogLevelEnvVar = "LOG_LEVEL"


@@ -4,11 +4,9 @@ go 1.17
require (
github.com/docker/go-units v0.4.0
github.com/golang-jwt/jwt/v4 v4.2.0
github.com/op/go-logging v0.0.0-20160315200505-970db520ece7
github.com/up9inc/mizu/logger v0.0.0
github.com/up9inc/mizu/tap/api v0.0.0
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b
k8s.io/api v0.23.3
k8s.io/apimachinery v0.23.3
k8s.io/client-go v0.23.3
@@ -38,11 +36,11 @@ require (
github.com/go-openapi/jsonreference v0.19.6 // indirect
github.com/go-openapi/swag v0.21.1 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang-jwt/jwt/v4 v4.2.0 // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/google/btree v1.0.1 // indirect
github.com/google/go-cmp v0.5.7 // indirect
github.com/google/gofuzz v1.2.0 // indirect
github.com/google/martian v2.1.0+incompatible // indirect
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 // indirect
github.com/google/uuid v1.3.0 // indirect
github.com/googleapis/gnostic v0.5.5 // indirect
@@ -81,6 +79,7 @@ require (
google.golang.org/protobuf v1.27.1 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b // indirect
k8s.io/cli-runtime v0.23.3 // indirect
k8s.io/component-base v0.23.3 // indirect
k8s.io/klog/v2 v2.40.1 // indirect


@@ -282,7 +282,6 @@ github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/
github.com/google/gofuzz v1.1.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0=
github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/martian v2.1.0+incompatible h1:/CP5g8u/VJHijgedC/Legn3BAbAaWPgecwXBIDzw5no=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/martian/v3 v3.0.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
github.com/google/martian/v3 v3.1.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=


@@ -43,11 +43,11 @@ type TapperSyncerConfig struct {
TapperResources shared.Resources
ImagePullPolicy core.PullPolicy
LogLevel logging.Level
IgnoredUserAgents []string
MizuApiFilteringOptions api.TrafficFilteringOptions
MizuServiceAccountExists bool
ServiceMesh bool
Tls bool
MaxLiveStreams int
}
func CreateAndStartMizuTapperSyncer(ctx context.Context, kubernetesProvider *Provider, config TapperSyncerConfig, startTime time.Time) (*MizuTapperSyncer, error) {
@@ -337,7 +337,8 @@ func (tapperSyncer *MizuTapperSyncer) updateMizuTappers() error {
tapperSyncer.config.MizuApiFilteringOptions,
tapperSyncer.config.LogLevel,
tapperSyncer.config.ServiceMesh,
tapperSyncer.config.Tls); err != nil {
tapperSyncer.config.Tls,
tapperSyncer.config.MaxLiveStreams); err != nil {
return err
}


@@ -10,6 +10,7 @@ import (
"net/url"
"path/filepath"
"regexp"
"strconv"
"github.com/op/go-logging"
"github.com/up9inc/mizu/logger"
@@ -382,11 +383,11 @@ func (provider *Provider) GetMizuApiServerPodObject(opts *ApiServerOptions, moun
Tolerations: []core.Toleration{
{
Operator: core.TolerationOpExists,
Effect: core.TaintEffectNoExecute,
Effect: core.TaintEffectNoExecute,
},
{
Operator: core.TolerationOpExists,
Effect: core.TaintEffectNoSchedule,
Effect: core.TaintEffectNoSchedule,
},
},
},
@@ -684,11 +685,8 @@ func (provider *Provider) handleRemovalError(err error) error {
return err
}
func (provider *Provider) CreateConfigMap(ctx context.Context, namespace string, configMapName string, serializedValidationRules string, serializedMizuConfig string) error {
func (provider *Provider) CreateConfigMap(ctx context.Context, namespace string, configMapName string, serializedMizuConfig string) error {
configMapData := make(map[string]string)
if serializedValidationRules != "" {
configMapData[shared.ValidationRulesFileName] = serializedValidationRules
}
configMapData[shared.ConfigFileName] = serializedMizuConfig
configMap := &core.ConfigMap{
@@ -711,7 +709,7 @@ func (provider *Provider) CreateConfigMap(ctx context.Context, namespace string,
return nil
}
func (provider *Provider) ApplyMizuTapperDaemonSet(ctx context.Context, namespace string, daemonSetName string, podImage string, tapperPodName string, apiServerPodIp string, nodeNames []string, serviceAccountName string, resources shared.Resources, imagePullPolicy core.PullPolicy, mizuApiFilteringOptions api.TrafficFilteringOptions, logLevel logging.Level, serviceMesh bool, tls bool) error {
func (provider *Provider) ApplyMizuTapperDaemonSet(ctx context.Context, namespace string, daemonSetName string, podImage string, tapperPodName string, apiServerPodIp string, nodeNames []string, serviceAccountName string, resources shared.Resources, imagePullPolicy core.PullPolicy, mizuApiFilteringOptions api.TrafficFilteringOptions, logLevel logging.Level, serviceMesh bool, tls bool, maxLiveStreams int) error {
logger.Log.Debugf("Applying %d tapper daemon sets, ns: %s, daemonSetName: %s, podImage: %s, tapperPodName: %s", len(nodeNames), namespace, daemonSetName, podImage, tapperPodName)
if len(nodeNames) == 0 {
@@ -729,6 +727,7 @@ func (provider *Provider) ApplyMizuTapperDaemonSet(ctx context.Context, namespac
"--tap",
"--api-server-address", fmt.Sprintf("ws://%s/wsTapper", apiServerPodIp),
"--nodefrag",
"--max-live-streams", strconv.Itoa(maxLiveStreams),
}
if serviceMesh {


@@ -1,13 +1,8 @@
package shared
import (
"io/ioutil"
"strings"
"github.com/op/go-logging"
"github.com/up9inc/mizu/logger"
"gopkg.in/yaml.v3"
v1 "k8s.io/api/core/v1"
)
@@ -48,14 +43,12 @@ type MizuAgentConfig struct {
AgentDatabasePath string `json:"agentDatabasePath"`
ServiceMap bool `json:"serviceMap"`
OAS OASConfig `json:"oas"`
Telemetry bool `json:"telemetry"`
}
type WebSocketMessageMetadata struct {
MessageType WebSocketMessageType `json:"messageType,omitempty"`
}
type WebSocketStatusMessage struct {
*WebSocketMessageMetadata
TappingStatus []TappedPodStatus `json:"tappingStatus"`
@@ -136,83 +129,3 @@ type HealthResponse struct {
type VersionResponse struct {
Ver string `json:"ver"`
}
type RulesPolicy struct {
Rules []RulePolicy `yaml:"rules"`
}
type RulePolicy struct {
Type string `yaml:"type"`
Service string `yaml:"service"`
Path string `yaml:"path"`
Method string `yaml:"method"`
Key string `yaml:"key"`
Value string `yaml:"value"`
ResponseTime int64 `yaml:"response-time"`
Name string `yaml:"name"`
}
type RulesMatched struct {
Matched bool `json:"matched"`
Rule RulePolicy `json:"rule"`
}
func (r *RulePolicy) validateType() bool {
permitedTypes := []string{"json", "header", "slo"}
_, found := Find(permitedTypes, r.Type)
if !found {
logger.Log.Errorf("Only json, header and slo types are supported on rule definition. This rule will be ignored. rule name: %s", r.Name)
found = false
}
if strings.ToLower(r.Type) == "slo" {
if r.ResponseTime <= 0 {
logger.Log.Errorf("When rule type is slo, the field response-time should be specified and have a value >= 1. rule name: %s", r.Name)
found = false
}
}
return found
}
func (rules *RulesPolicy) ValidateRulesPolicy() []int {
invalidIndex := make([]int, 0)
for i := range rules.Rules {
validated := rules.Rules[i].validateType()
if !validated {
invalidIndex = append(invalidIndex, i)
}
}
return invalidIndex
}
func Find(slice []string, val string) (int, bool) {
for i, item := range slice {
if item == val {
return i, true
}
}
return -1, false
}
func DecodeEnforcePolicy(path string) (RulesPolicy, error) {
content, err := ioutil.ReadFile(path)
enforcePolicy := RulesPolicy{}
if err != nil {
return enforcePolicy, err
}
err = yaml.Unmarshal(content, &enforcePolicy)
if err != nil {
return enforcePolicy, err
}
invalidIndex := enforcePolicy.ValidateRulesPolicy()
var k = 0
if len(invalidIndex) != 0 {
for i, rule := range enforcePolicy.Rules {
if !ContainsInt(invalidIndex, i) {
enforcePolicy.Rules[k] = rule
k++
}
}
enforcePolicy.Rules = enforcePolicy.Rules[:k]
}
return enforcePolicy, nil
}


@@ -10,15 +10,6 @@ func Contains(slice []string, containsValue string) bool {
return false
}
func ContainsInt(slice []int, containsValue int) bool {
for _, sliceValue := range slice {
if sliceValue == containsValue {
return true
}
}
return false
}
func Unique(slice []string) []string {
keys := make(map[string]bool)
var list []string


@@ -2,7 +2,9 @@ package api
import (
"bufio"
"fmt"
"net"
"strings"
"sync"
"time"
@@ -14,12 +16,29 @@ const UnknownNamespace = ""
var UnknownIp = net.IP{0, 0, 0, 0}
var UnknownPort uint16 = 0
type ProtocolSummary struct {
Name string `json:"name"`
Version string `json:"version"`
Abbreviation string `json:"abbr"`
}
func (protocol *ProtocolSummary) ToString() string {
return fmt.Sprintf("%s?%s?%s", protocol.Name, protocol.Version, protocol.Abbreviation)
}
func GetProtocolSummary(inputString string) *ProtocolSummary {
splitted := strings.SplitN(inputString, "?", 3)
return &ProtocolSummary{
Name: splitted[0],
Version: splitted[1],
Abbreviation: splitted[2],
}
}
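A round-trip sketch of the summary encoding (an illustrative fragment, assuming the package above). Note the '?'-delimited format relies on none of the fields containing '?':

summary := &ProtocolSummary{Name: "http", Version: "1.1", Abbreviation: "HTTP"}
encoded := summary.ToString()          // "http?1.1?HTTP"
decoded := GetProtocolSummary(encoded) // Name: "http", Version: "1.1", Abbreviation: "HTTP"
fmt.Println(encoded, decoded.Name)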
type Protocol struct {
Name string `json:"name"`
ProtocolSummary
LongName string `json:"longName"`
Abbreviation string `json:"abbr"`
Macro string `json:"macro"`
Version string `json:"version"`
BackgroundColor string `json:"backgroundColor"`
ForegroundColor string `json:"foregroundColor"`
FontSize int8 `json:"fontSize"`
@@ -91,7 +110,6 @@ type OutputChannelItem struct {
Timestamp int64
ConnectionInfo *ConnectionInfo
Pair *RequestResponsePair
Summary *BaseEntry
Namespace string
}
@@ -116,6 +134,7 @@ func (p *ReadProgress) Reset() {
type Dissector interface {
Register(*Extension)
GetProtocols() map[string]*Protocol
Ping()
Dissect(b *bufio.Reader, reader TcpReader, options *TrafficFilteringOptions) error
Analyze(item *OutputChannelItem, resolvedSource string, resolvedDestination string, namespace string) *Entry
@@ -151,7 +170,7 @@ func (e *Emitting) Emit(item *OutputChannelItem) {
type Entry struct {
Id string `json:"id"`
Protocol Protocol `json:"proto"`
Protocol ProtocolSummary `json:"protocol"`
Capture Capture `json:"capture"`
Source *TCP `json:"src"`
Destination *TCP `json:"dst"`
@@ -164,40 +183,30 @@ type Entry struct {
RequestSize int `json:"requestSize"`
ResponseSize int `json:"responseSize"`
ElapsedTime int64 `json:"elapsedTime"`
Rules ApplicableRules `json:"rules,omitempty"`
}
type EntryWrapper struct {
Protocol Protocol `json:"protocol"`
Representation string `json:"representation"`
Data *Entry `json:"data"`
Base *BaseEntry `json:"base"`
Rules []map[string]interface{} `json:"rulesMatched,omitempty"`
IsRulesEnabled bool `json:"isRulesEnabled"`
Protocol Protocol `json:"protocol"`
Representation string `json:"representation"`
Data *Entry `json:"data"`
Base *BaseEntry `json:"base"`
}
type BaseEntry struct {
Id string `json:"id"`
Protocol Protocol `json:"proto,omitempty"`
Capture Capture `json:"capture"`
Summary string `json:"summary,omitempty"`
SummaryQuery string `json:"summaryQuery,omitempty"`
Status int `json:"status"`
StatusQuery string `json:"statusQuery"`
Method string `json:"method,omitempty"`
MethodQuery string `json:"methodQuery,omitempty"`
Timestamp int64 `json:"timestamp,omitempty"`
Source *TCP `json:"src"`
Destination *TCP `json:"dst"`
IsOutgoing bool `json:"isOutgoing,omitempty"`
Latency int64 `json:"latency"`
Rules ApplicableRules `json:"rules,omitempty"`
}
type ApplicableRules struct {
Latency int64 `json:"latency,omitempty"`
Status bool `json:"status,omitempty"`
NumberOfRules int `json:"numberOfRules,omitempty"`
Id string `json:"id"`
Protocol Protocol `json:"proto,omitempty"`
Capture Capture `json:"capture"`
Summary string `json:"summary,omitempty"`
SummaryQuery string `json:"summaryQuery,omitempty"`
Status int `json:"status"`
StatusQuery string `json:"statusQuery"`
Method string `json:"method,omitempty"`
MethodQuery string `json:"methodQuery,omitempty"`
Timestamp int64 `json:"timestamp,omitempty"`
Source *TCP `json:"src"`
Destination *TCP `json:"dst"`
IsOutgoing bool `json:"isOutgoing,omitempty"`
Latency int64 `json:"latency"`
}
const (


@@ -1,7 +1,5 @@
package api
type TrafficFilteringOptions struct {
IgnoredUserAgents []string
PlainTextMaskingRegexes []*SerializableRegexp
EnableRedaction bool
IgnoredUserAgents []string
}


@@ -8,9 +8,7 @@ test-update: test-pull-bin
@MIZU_TEST=1 TEST_UPDATE=1 go test -v ./... -coverpkg=./... -coverprofile=coverage.out -covermode=atomic
test-pull-bin:
@mkdir -p bin
@[ "${skipbin}" ] && echo "Skipping downloading BINs" || gsutil -o 'GSUtil:parallel_process_count=5' -o 'GSUtil:parallel_thread_count=5' -m cp gs://static.up9.io/mizu/test-pcap/bin/amqp/\*.bin bin
@[ "${skipbin}" ] && echo "Skipping downloading BINs" || ../get-folder-of-tests.sh bin/amqp bin
test-pull-expect:
@mkdir -p expect
@[ "${skipexpect}" ] && echo "Skipping downloading expected JSONs" || gsutil -o 'GSUtil:parallel_process_count=5' -o 'GSUtil:parallel_thread_count=5' -m cp -r gs://static.up9.io/mizu/test-pcap/expect11/amqp/\* expect
@[ "${skipexpect}" ] && echo "Skipping downloading expected JSONs" || ../get-folder-of-tests.sh expect/amqp expect


@@ -4,16 +4,20 @@ go 1.17
require (
github.com/stretchr/testify v1.7.0
github.com/up9inc/mizu/logger v0.0.0
github.com/up9inc/mizu/tap/api v0.0.0
)
require (
github.com/davecgh/go-spew v1.1.0 // indirect
github.com/op/go-logging v0.0.0-20160315200505-970db520ece7 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/up9inc/mizu/tap/dbgctl v0.0.0 // indirect
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c // indirect
)
replace github.com/up9inc/mizu/logger v0.0.0 => ../../../logger
replace github.com/up9inc/mizu/tap/api v0.0.0 => ../../api
replace github.com/up9inc/mizu/tap/dbgctl v0.0.0 => ../../dbgctl


@@ -1,5 +1,7 @@
github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/op/go-logging v0.0.0-20160315200505-970db520ece7 h1:lDH9UUVJtmYCjyT0CI4q8xvlXPxeZ0gYCVvWbmPlp88=
github.com/op/go-logging v0.0.0-20160315200505-970db520ece7/go.mod h1:HzydrMdWErDVzsI23lYNej1Htcns9BCg93Dk0bBINWk=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=

View File

@@ -5,8 +5,8 @@ import (
"fmt"
"sort"
"strconv"
"time"
"github.com/up9inc/mizu/logger"
"github.com/up9inc/mizu/tap/api"
)
@@ -25,14 +25,14 @@ var connectionMethodMap = map[int]string{
61: "connection unblocked",
}
// var channelMethodMap = map[int]string{
// 10: "channel open",
// 11: "channel open-ok",
// 20: "channel flow",
// 21: "channel flow-ok",
// 40: "channel close",
// 41: "channel close-ok",
// }
var channelMethodMap = map[int]string{
10: "channel open",
11: "channel open-ok",
20: "channel flow",
21: "channel flow-ok",
40: "channel close",
41: "channel close-ok",
}
var exchangeMethodMap = map[int]string{
10: "exchange declare",
@@ -94,29 +94,41 @@ type AMQPWrapper struct {
Details interface{} `json:"details"`
}
func emitAMQP(event interface{}, _type string, method string, connectionInfo *api.ConnectionInfo, captureTime time.Time, captureSize int, emitter api.Emitter, capture api.Capture) {
request := &api.GenericMessage{
IsRequest: true,
CaptureTime: captureTime,
Payload: AMQPPayload{
Data: &AMQPWrapper{
Method: method,
Url: "",
Details: event,
},
},
type emptyResponse struct {
}
const emptyMethod = "empty"
func getIdent(reader api.TcpReader, methodFrame *MethodFrame) (ident string) {
tcpID := reader.GetTcpID()
// To match methods to their Ok(s)
methodId := methodFrame.MethodId - methodFrame.MethodId%10
if reader.GetIsClient() {
ident = fmt.Sprintf(
"%s_%s_%s_%s_%d_%d_%d",
tcpID.SrcIP,
tcpID.DstIP,
tcpID.SrcPort,
tcpID.DstPort,
methodFrame.ChannelId,
methodFrame.ClassId,
methodId,
)
} else {
ident = fmt.Sprintf(
"%s_%s_%s_%s_%d_%d_%d",
tcpID.DstIP,
tcpID.SrcIP,
tcpID.DstPort,
tcpID.SrcPort,
methodFrame.ChannelId,
methodFrame.ClassId,
methodId,
)
}
item := &api.OutputChannelItem{
Protocol: protocol,
Capture: capture,
Timestamp: captureTime.UnixNano() / int64(time.Millisecond),
ConnectionInfo: connectionInfo,
Pair: &api.RequestResponsePair{
Request: *request,
Response: api.GenericMessage{},
},
}
emitter.Emit(item)
return
}
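
A note on the ident scheme introduced above (an illustration, not part of the diff): getIdent rounds the method id down to the nearest multiple of ten and, for frames read on the non-client side of the connection, swaps the src/dst fields, so a method frame and its corresponding *Ok frame land on the same key and can be paired by the matcher. A minimal, self-contained sketch of the rounding, assuming AMQP's numbering where queue bind is method id 20 and queue bind-ok is 21:

package main

import "fmt"

// roundedMethodId mirrors the `methodFrame.MethodId - methodFrame.MethodId%10`
// expression used in getIdent above.
func roundedMethodId(methodId int) int {
	return methodId - methodId%10
}

func main() {
	fmt.Println(roundedMethodId(20)) // 20 (queue bind)
	fmt.Println(roundedMethodId(21)) // 20 (queue bind-ok falls into the same bucket, hence the same ident)
}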
func representProperties(properties map[string]interface{}, rep []interface{}) ([]interface{}, string, string) {
@@ -460,6 +472,36 @@ func representQueueDeclare(event map[string]interface{}) []interface{} {
return rep
}
func representQueueDeclareOk(event map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
details, _ := json.Marshal([]api.TableData{
{
Name: "Queue",
Value: event["queue"].(string),
Selector: `response.queue`,
},
{
Name: "Message Count",
Value: fmt.Sprintf("%g", event["messageCount"].(float64)),
Selector: `response.messageCount`,
},
{
Name: "Consumer Count",
Value: fmt.Sprintf("%g", event["consumerCount"].(float64)),
Selector: `response.consumerCount`,
},
})
rep = append(rep, api.SectionData{
Type: api.TABLE,
Title: "Details",
Data: string(details),
})
return rep
}
func representExchangeDeclare(event map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
@@ -571,7 +613,7 @@ func representConnectionStart(event map[string]interface{}) []interface{} {
x, _ := json.Marshal(value)
outcome = string(x)
default:
panic("Unknown data type for the server property!")
logger.Log.Info("Unknown data type for the server property!")
}
headers = append(headers, api.TableData{
Name: name,
@@ -593,6 +635,65 @@ func representConnectionStart(event map[string]interface{}) []interface{} {
return rep
}
func representConnectionStartOk(event map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
details, _ := json.Marshal([]api.TableData{
{
Name: "Mechanism",
Value: event["mechanism"].(string),
Selector: `response.mechanism`,
},
{
Name: "Response",
Value: event["response"].(string),
Selector: `response.response`,
},
{
Name: "Locale",
Value: event["locale"].(string),
Selector: `response.locale`,
},
})
rep = append(rep, api.SectionData{
Type: api.TABLE,
Title: "Details",
Data: string(details),
})
if event["clientProperties"] != nil {
headers := make([]api.TableData, 0)
for name, value := range event["clientProperties"].(map[string]interface{}) {
var outcome string
switch v := value.(type) {
case string:
outcome = v
case map[string]interface{}:
x, _ := json.Marshal(value)
outcome = string(x)
default:
logger.Log.Info("Unknown data type for the client property!")
}
headers = append(headers, api.TableData{
Name: name,
Value: outcome,
Selector: fmt.Sprintf(`response.clientProperties["%s"]`, name),
})
}
sort.Slice(headers, func(i, j int) bool {
return headers[i].Name < headers[j].Name
})
headersMarshaled, _ := json.Marshal(headers)
rep = append(rep, api.SectionData{
Type: api.TABLE,
Title: "Client Properties",
Data: string(headersMarshaled),
})
}
return rep
}
func representConnectionClose(event map[string]interface{}) []interface{} {
replyCode := ""
@@ -750,3 +851,122 @@ func representBasicConsume(event map[string]interface{}) []interface{} {
return rep
}
func representBasicConsumeOk(event map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
details, _ := json.Marshal([]api.TableData{
{
Name: "Consumer Tag",
Value: event["consumerTag"].(string),
Selector: `response.consumerTag`,
},
})
rep = append(rep, api.SectionData{
Type: api.TABLE,
Title: "Details",
Data: string(details),
})
return rep
}
func representConnectionOpen(event map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
details, _ := json.Marshal([]api.TableData{
{
Name: "Virtual Host",
Value: event["virtualHost"].(string),
Selector: `request.virtualHost`,
},
})
rep = append(rep, api.SectionData{
Type: api.TABLE,
Title: "Details",
Data: string(details),
})
return rep
}
func representConnectionTune(event map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
details, _ := json.Marshal([]api.TableData{
{
Name: "Channel Max",
Value: fmt.Sprintf("%g", event["channelMax"].(float64)),
Selector: `request.channelMax`,
},
{
Name: "Frame Max",
Value: fmt.Sprintf("%g", event["frameMax"].(float64)),
Selector: `request.frameMax`,
},
{
Name: "Heartbeat",
Value: fmt.Sprintf("%g", event["heartbeat"].(float64)),
Selector: `request.heartbeat`,
},
})
rep = append(rep, api.SectionData{
Type: api.TABLE,
Title: "Details",
Data: string(details),
})
return rep
}
func representBasicCancel(event map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
details, _ := json.Marshal([]api.TableData{
{
Name: "Consumer Tag",
Value: event["consumerTag"].(string),
Selector: `request.consumerTag`,
},
{
Name: "NoWait",
Value: strconv.FormatBool(event["noWait"].(bool)),
Selector: `request.noWait`,
},
})
rep = append(rep, api.SectionData{
Type: api.TABLE,
Title: "Details",
Data: string(details),
})
return rep
}
func representBasicCancelOk(event map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
details, _ := json.Marshal([]api.TableData{
{
Name: "Consumer Tag",
Value: event["consumerTag"].(string),
Selector: `response.consumerTag`,
},
})
rep = append(rep, api.SectionData{
Type: api.TABLE,
Title: "Details",
Data: string(details),
})
return rep
}
func representEmpty(event map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
return rep
}

View File

@@ -13,11 +13,13 @@ import (
)
var protocol = api.Protocol{
Name: "amqp",
ProtocolSummary: api.ProtocolSummary{
Name: "amqp",
Version: "0-9-1",
Abbreviation: "AMQP",
},
LongName: "Advanced Message Queuing Protocol 0-9-1",
Abbreviation: "AMQP",
Macro: "amqp",
Version: "0-9-1",
BackgroundColor: "#ff6600",
ForegroundColor: "#ffffff",
FontSize: 12,
@@ -26,32 +28,30 @@ var protocol = api.Protocol{
Priority: 1,
}
var protocolsMap = map[string]*api.Protocol{
protocol.ToString(): &protocol,
}
type dissecting string
func (d dissecting) Register(extension *api.Extension) {
extension.Protocol = &protocol
}
func (d dissecting) GetProtocols() map[string]*api.Protocol {
return protocolsMap
}
func (d dissecting) Ping() {
log.Printf("pong %s", protocol.Name)
}
const amqpRequest string = "amqp_request"
func (d dissecting) Dissect(b *bufio.Reader, reader api.TcpReader, options *api.TrafficFilteringOptions) error {
r := AmqpReader{b}
var remaining int
var header *HeaderFrame
connectionInfo := &api.ConnectionInfo{
ClientIP: reader.GetTcpID().SrcIP,
ClientPort: reader.GetTcpID().SrcPort,
ServerIP: reader.GetTcpID().DstIP,
ServerPort: reader.GetTcpID().DstPort,
IsOutgoing: true,
}
eventBasicPublish := &BasicPublish{
Exchange: "",
RoutingKey: "",
@@ -73,6 +73,10 @@ func (d dissecting) Dissect(b *bufio.Reader, reader api.TcpReader, options *api.
var lastMethodFrameMessage Message
var ident string
isClient := reader.GetIsClient()
reqResMatcher := reader.GetReqResMatcher().(*requestResponseMatcher)
for {
frameVal, err := r.readFrame()
if err == io.EOF {
@@ -111,16 +115,22 @@ func (d dissecting) Dissect(b *bufio.Reader, reader api.TcpReader, options *api.
switch lastMethodFrameMessage.(type) {
case *BasicPublish:
eventBasicPublish.Body = f.Body
emitAMQP(*eventBasicPublish, amqpRequest, basicMethodMap[40], connectionInfo, reader.GetCaptureTime(), reader.GetReadProgress().Current(), reader.GetEmitter(), reader.GetParent().GetOrigin())
reqResMatcher.emitEvent(isClient, ident, basicMethodMap[40], *eventBasicPublish, reader)
reqResMatcher.emitEvent(!isClient, ident, emptyMethod, &emptyResponse{}, reader)
case *BasicDeliver:
eventBasicDeliver.Body = f.Body
emitAMQP(*eventBasicDeliver, amqpRequest, basicMethodMap[60], connectionInfo, reader.GetCaptureTime(), reader.GetReadProgress().Current(), reader.GetEmitter(), reader.GetParent().GetOrigin())
reqResMatcher.emitEvent(!isClient, ident, basicMethodMap[60], *eventBasicDeliver, reader)
reqResMatcher.emitEvent(isClient, ident, emptyMethod, &emptyResponse{}, reader)
}
case *MethodFrame:
reader.GetParent().SetProtocol(&protocol)
lastMethodFrameMessage = f.Method
ident = getIdent(reader, f)
switch m := f.Method.(type) {
case *BasicPublish:
eventBasicPublish.Exchange = m.Exchange
@@ -136,7 +146,10 @@ func (d dissecting) Dissect(b *bufio.Reader, reader api.TcpReader, options *api.
NoWait: m.NoWait,
Arguments: m.Arguments,
}
emitAMQP(*eventQueueBind, amqpRequest, queueMethodMap[20], connectionInfo, reader.GetCaptureTime(), reader.GetReadProgress().Current(), reader.GetEmitter(), reader.GetParent().GetOrigin())
reqResMatcher.emitEvent(isClient, ident, queueMethodMap[20], *eventQueueBind, reader)
case *QueueBindOk:
reqResMatcher.emitEvent(isClient, ident, queueMethodMap[21], m, reader)
case *BasicConsume:
eventBasicConsume := &BasicConsume{
@@ -148,7 +161,10 @@ func (d dissecting) Dissect(b *bufio.Reader, reader api.TcpReader, options *api.
NoWait: m.NoWait,
Arguments: m.Arguments,
}
emitAMQP(*eventBasicConsume, amqpRequest, basicMethodMap[20], connectionInfo, reader.GetCaptureTime(), reader.GetReadProgress().Current(), reader.GetEmitter(), reader.GetParent().GetOrigin())
reqResMatcher.emitEvent(isClient, ident, basicMethodMap[20], *eventBasicConsume, reader)
case *BasicConsumeOk:
reqResMatcher.emitEvent(isClient, ident, basicMethodMap[21], m, reader)
case *BasicDeliver:
eventBasicDeliver.ConsumerTag = m.ConsumerTag
@@ -167,7 +183,10 @@ func (d dissecting) Dissect(b *bufio.Reader, reader api.TcpReader, options *api.
NoWait: m.NoWait,
Arguments: m.Arguments,
}
emitAMQP(*eventQueueDeclare, amqpRequest, queueMethodMap[10], connectionInfo, reader.GetCaptureTime(), reader.GetReadProgress().Current(), reader.GetEmitter(), reader.GetParent().GetOrigin())
reqResMatcher.emitEvent(isClient, ident, queueMethodMap[10], *eventQueueDeclare, reader)
case *QueueDeclareOk:
reqResMatcher.emitEvent(isClient, ident, queueMethodMap[11], m, reader)
case *ExchangeDeclare:
eventExchangeDeclare := &ExchangeDeclare{
@@ -180,17 +199,19 @@ func (d dissecting) Dissect(b *bufio.Reader, reader api.TcpReader, options *api.
NoWait: m.NoWait,
Arguments: m.Arguments,
}
emitAMQP(*eventExchangeDeclare, amqpRequest, exchangeMethodMap[10], connectionInfo, reader.GetCaptureTime(), reader.GetReadProgress().Current(), reader.GetEmitter(), reader.GetParent().GetOrigin())
reqResMatcher.emitEvent(isClient, ident, exchangeMethodMap[10], *eventExchangeDeclare, reader)
case *ExchangeDeclareOk:
reqResMatcher.emitEvent(isClient, ident, exchangeMethodMap[11], m, reader)
case *ConnectionStart:
eventConnectionStart := &ConnectionStart{
VersionMajor: m.VersionMajor,
VersionMinor: m.VersionMinor,
ServerProperties: m.ServerProperties,
Mechanisms: m.Mechanisms,
Locales: m.Locales,
}
emitAMQP(*eventConnectionStart, amqpRequest, connectionMethodMap[10], connectionInfo, reader.GetCaptureTime(), reader.GetReadProgress().Current(), reader.GetEmitter(), reader.GetParent().GetOrigin())
// In our tests, *ConnectionStart does not result in *ConnectionStartOk
reqResMatcher.emitEvent(!isClient, ident, connectionMethodMap[10], m, reader)
reqResMatcher.emitEvent(isClient, ident, emptyMethod, &emptyResponse{}, reader)
case *ConnectionStartOk:
// In our tests, *ConnectionStart does not result in *ConnectionStartOk
reqResMatcher.emitEvent(isClient, ident, connectionMethodMap[11], m, reader)
case *ConnectionClose:
eventConnectionClose := &ConnectionClose{
@@ -199,7 +220,40 @@ func (d dissecting) Dissect(b *bufio.Reader, reader api.TcpReader, options *api.
ClassId: m.ClassId,
MethodId: m.MethodId,
}
emitAMQP(*eventConnectionClose, amqpRequest, connectionMethodMap[50], connectionInfo, reader.GetCaptureTime(), reader.GetReadProgress().Current(), reader.GetEmitter(), reader.GetParent().GetOrigin())
reqResMatcher.emitEvent(isClient, ident, connectionMethodMap[50], *eventConnectionClose, reader)
case *ConnectionCloseOk:
reqResMatcher.emitEvent(isClient, ident, connectionMethodMap[51], m, reader)
case *connectionOpen:
eventConnectionOpen := &connectionOpen{
VirtualHost: m.VirtualHost,
}
reqResMatcher.emitEvent(isClient, ident, connectionMethodMap[40], *eventConnectionOpen, reader)
case *connectionOpenOk:
reqResMatcher.emitEvent(isClient, ident, connectionMethodMap[41], m, reader)
case *channelOpen:
reqResMatcher.emitEvent(isClient, ident, channelMethodMap[10], m, reader)
case *channelOpenOk:
reqResMatcher.emitEvent(isClient, ident, channelMethodMap[11], m, reader)
case *connectionTune:
// In our tests, *connectionTune does not result in *connectionTuneOk
reqResMatcher.emitEvent(!isClient, ident, connectionMethodMap[30], m, reader)
reqResMatcher.emitEvent(isClient, ident, emptyMethod, &emptyResponse{}, reader)
case *connectionTuneOk:
// In our tests, *connectionTune does not result in *connectionTuneOk
reqResMatcher.emitEvent(isClient, ident, connectionMethodMap[31], m, reader)
case *basicCancel:
reqResMatcher.emitEvent(isClient, ident, basicMethodMap[30], m, reader)
case *basicCancelOk:
reqResMatcher.emitEvent(isClient, ident, basicMethodMap[31], m, reader)
}
default:
@@ -210,11 +264,19 @@ func (d dissecting) Dissect(b *bufio.Reader, reader api.TcpReader, options *api.
func (d dissecting) Analyze(item *api.OutputChannelItem, resolvedSource string, resolvedDestination string, namespace string) *api.Entry {
request := item.Pair.Request.Payload.(map[string]interface{})
response := item.Pair.Response.Payload.(map[string]interface{})
reqDetails := request["details"].(map[string]interface{})
resDetails := response["details"].(map[string]interface{})
elapsedTime := item.Pair.Response.CaptureTime.Sub(item.Pair.Request.CaptureTime).Round(time.Millisecond).Milliseconds()
if elapsedTime < 0 {
elapsedTime = 0
}
reqDetails["method"] = request["method"]
resDetails["method"] = response["method"]
return &api.Entry{
Protocol: protocol,
Protocol: protocol.ProtocolSummary,
Capture: item.Capture,
Source: &api.TCP{
Name: resolvedSource,
@@ -226,13 +288,15 @@ func (d dissecting) Analyze(item *api.OutputChannelItem, resolvedSource string,
IP: item.ConnectionInfo.ServerIP,
Port: item.ConnectionInfo.ServerPort,
},
Namespace: namespace,
Outgoing: item.ConnectionInfo.IsOutgoing,
Request: reqDetails,
RequestSize: item.Pair.Request.CaptureSize,
Timestamp: item.Timestamp,
StartTime: item.Pair.Request.CaptureTime,
ElapsedTime: 0,
Namespace: namespace,
Outgoing: item.ConnectionInfo.IsOutgoing,
Request: reqDetails,
Response: resDetails,
RequestSize: item.Pair.Request.CaptureSize,
ResponseSize: item.Pair.Response.CaptureSize,
Timestamp: item.Timestamp,
StartTime: item.Pair.Request.CaptureTime,
ElapsedTime: elapsedTime,
}
}
@@ -273,11 +337,26 @@ func (d dissecting) Summarize(entry *api.Entry) *api.BaseEntry {
case basicMethodMap[20]:
summary = entry.Request["queue"].(string)
summaryQuery = fmt.Sprintf(`request.queue == "%s"`, summary)
case connectionMethodMap[40]:
summary = entry.Request["virtualHost"].(string)
summaryQuery = fmt.Sprintf(`request.virtualHost == "%s"`, summary)
case connectionMethodMap[30]:
summary = fmt.Sprintf("%g", entry.Request["channelMax"].(float64))
summaryQuery = fmt.Sprintf(`request.channelMax == "%s"`, summary)
case connectionMethodMap[31]:
summary = fmt.Sprintf("%g", entry.Request["channelMax"].(float64))
summaryQuery = fmt.Sprintf(`request.channelMax == "%s"`, summary)
case basicMethodMap[30]:
summary = entry.Request["consumerTag"].(string)
summaryQuery = fmt.Sprintf(`request.consumerTag == "%s"`, summary)
case basicMethodMap[31]:
summary = entry.Request["consumerTag"].(string)
summaryQuery = fmt.Sprintf(`request.consumerTag == "%s"`, summary)
}
return &api.BaseEntry{
Id: entry.Id,
Protocol: entry.Protocol,
Protocol: *protocolsMap[entry.Protocol.ToString()],
Capture: entry.Capture,
Summary: summary,
SummaryQuery: summaryQuery,
@@ -290,13 +369,14 @@ func (d dissecting) Summarize(entry *api.Entry) *api.BaseEntry {
Destination: entry.Destination,
IsOutgoing: entry.Outgoing,
Latency: entry.ElapsedTime,
Rules: entry.Rules,
}
}
func (d dissecting) Represent(request map[string]interface{}, response map[string]interface{}) (object []byte, err error) {
representation := make(map[string]interface{})
var repRequest []interface{}
var repResponse []interface{}
switch request["method"].(string) {
case basicMethodMap[40]:
repRequest = representBasicPublish(request)
@@ -314,20 +394,56 @@ func (d dissecting) Represent(request map[string]interface{}, response map[strin
repRequest = representQueueBind(request)
case basicMethodMap[20]:
repRequest = representBasicConsume(request)
case connectionMethodMap[40]:
repRequest = representConnectionOpen(request)
case channelMethodMap[10]:
repRequest = representEmpty(request)
case connectionMethodMap[30]:
repRequest = representConnectionTune(request)
case basicMethodMap[30]:
repRequest = representBasicCancel(request)
}
switch response["method"].(string) {
case queueMethodMap[11]:
repResponse = representQueueDeclareOk(response)
case exchangeMethodMap[11]:
repResponse = representEmpty(response)
case connectionMethodMap[11]:
repResponse = representConnectionStartOk(response)
case connectionMethodMap[51]:
repResponse = representEmpty(response)
case basicMethodMap[21]:
repResponse = representBasicConsumeOk(response)
case queueMethodMap[21]:
repResponse = representEmpty(response)
case connectionMethodMap[41]:
repResponse = representEmpty(response)
case channelMethodMap[11]:
repResponse = representEmpty(request)
case connectionMethodMap[31]:
repResponse = representConnectionTune(request)
case basicMethodMap[31]:
repResponse = representBasicCancelOk(request)
case emptyMethod:
repResponse = representEmpty(response)
}
representation["request"] = repRequest
representation["response"] = repResponse
object, err = json.Marshal(representation)
return
}
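
For reference (illustrative, not part of the diff): the marshaled representation returned here has the shape {"request": [ ...SectionData... ], "response": [ ...SectionData... ]}, where each SectionData carries a Type, a Title such as "Details", and a Data field holding the JSON-encoded api.TableData slice built by the represent* helpers above.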
func (d dissecting) Macros() map[string]string {
return map[string]string{
`amqp`: fmt.Sprintf(`proto.name == "%s"`, protocol.Name),
`amqp`: fmt.Sprintf(`protocol.name == "%s"`, protocol.Name),
}
}
func (d dissecting) NewResponseRequestMatcher() api.RequestResponseMatcher {
return nil
return createResponseRequestMatcher()
}
var Dissector dissecting

View File

@@ -44,7 +44,7 @@ func TestRegister(t *testing.T) {
func TestMacros(t *testing.T) {
expectedMacros := map[string]string{
"amqp": `proto.name == "amqp"`,
"amqp": `protocol.name == "amqp"`,
}
dissector := NewDissector()
macros := dissector.Macros()

View File

@@ -0,0 +1,113 @@
package amqp
import (
"sync"
"time"
"github.com/up9inc/mizu/tap/api"
)
// Key is {client_addr}_{client_port}_{dest_addr}_{dest_port}_{channel_id}_{class_id}_{method_id}
type requestResponseMatcher struct {
openMessagesMap *sync.Map
}
func createResponseRequestMatcher() api.RequestResponseMatcher {
return &requestResponseMatcher{openMessagesMap: &sync.Map{}}
}
func (matcher *requestResponseMatcher) GetMap() *sync.Map {
return matcher.openMessagesMap
}
func (matcher *requestResponseMatcher) SetMaxTry(value int) {
}
func (matcher *requestResponseMatcher) emitEvent(isRequest bool, ident string, method string, event interface{}, reader api.TcpReader) {
reader.GetParent().SetProtocol(&protocol)
var item *api.OutputChannelItem
if isRequest {
item = matcher.registerRequest(ident, method, event, reader.GetCaptureTime(), reader.GetReadProgress().Current())
} else {
item = matcher.registerResponse(ident, method, event, reader.GetCaptureTime(), reader.GetReadProgress().Current())
}
if item != nil {
item.ConnectionInfo = &api.ConnectionInfo{
ClientIP: reader.GetTcpID().SrcIP,
ClientPort: reader.GetTcpID().SrcPort,
ServerIP: reader.GetTcpID().DstIP,
ServerPort: reader.GetTcpID().DstPort,
IsOutgoing: true,
}
item.Capture = reader.GetParent().GetOrigin()
reader.GetEmitter().Emit(item)
}
}
func (matcher *requestResponseMatcher) registerRequest(ident string, method string, request interface{}, captureTime time.Time, captureSize int) *api.OutputChannelItem {
requestAMQPMessage := api.GenericMessage{
IsRequest: true,
CaptureTime: captureTime,
CaptureSize: captureSize,
Payload: AMQPPayload{
Data: &AMQPWrapper{
Method: method,
Url: "",
Details: request,
},
},
}
if response, found := matcher.openMessagesMap.LoadAndDelete(ident); found {
// Type assertion always succeeds because all of the map's values are of api.GenericMessage type
responseAMQPMessage := response.(*api.GenericMessage)
if responseAMQPMessage.IsRequest {
return nil
}
return matcher.preparePair(&requestAMQPMessage, responseAMQPMessage)
}
matcher.openMessagesMap.Store(ident, &requestAMQPMessage)
return nil
}
func (matcher *requestResponseMatcher) registerResponse(ident string, method string, response interface{}, captureTime time.Time, captureSize int) *api.OutputChannelItem {
responseAMQPMessage := api.GenericMessage{
IsRequest: false,
CaptureTime: captureTime,
CaptureSize: captureSize,
Payload: AMQPPayload{
Data: &AMQPWrapper{
Method: method,
Url: "",
Details: response,
},
},
}
if request, found := matcher.openMessagesMap.LoadAndDelete(ident); found {
// Type assertion always succeeds because all of the map's values are of api.GenericMessage type
requestAMQPMessage := request.(*api.GenericMessage)
if !requestAMQPMessage.IsRequest {
return nil
}
return matcher.preparePair(requestAMQPMessage, &responseAMQPMessage)
}
matcher.openMessagesMap.Store(ident, &responseAMQPMessage)
return nil
}
func (matcher *requestResponseMatcher) preparePair(requestAMQPMessage *api.GenericMessage, responseAMQPMessage *api.GenericMessage) *api.OutputChannelItem {
return &api.OutputChannelItem{
Protocol: protocol,
Timestamp: requestAMQPMessage.CaptureTime.UnixNano() / int64(time.Millisecond),
ConnectionInfo: nil,
Pair: &api.RequestResponsePair{
Request: *requestAMQPMessage,
Response: *responseAMQPMessage,
},
}
}
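
For orientation, a usage sketch of the matcher above (illustrative only, assumed to live in the same amqp package with "time" imported): pairing is order-independent, because whichever side arrives first is parked in openMessagesMap under the shared ident and the other side completes it via LoadAndDelete.

// Hypothetical sketch; the ident and capture sizes are made up for illustration.
m := createResponseRequestMatcher().(*requestResponseMatcher)

// The *Ok arrives first: nothing to pair with yet, so it is stored and nil is returned.
item := m.registerResponse("ident-1", "queue declare-ok", map[string]interface{}{}, time.Now(), 128)
// item == nil

// The request arrives later under the same ident and completes the pair.
item = m.registerRequest("ident-1", "queue declare", map[string]interface{}{}, time.Now(), 64)
// item != nil; item.Pair.Request and item.Pair.Response are both populated.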

View File

@@ -81,10 +81,10 @@ func (msg *ConnectionStart) read(r io.Reader) (err error) {
}
type ConnectionStartOk struct {
ClientProperties Table
Mechanism string
Response string
Locale string
ClientProperties Table `json:"clientProperties"`
Mechanism string `json:"mechanism"`
Response string `json:"response"`
Locale string `json:"locale"`
}
func (msg *ConnectionStartOk) read(r io.Reader) (err error) {
@@ -135,9 +135,9 @@ func (msg *connectionSecureOk) read(r io.Reader) (err error) {
}
type connectionTune struct {
ChannelMax uint16
FrameMax uint32
Heartbeat uint16
ChannelMax uint16 `json:"channelMax"`
FrameMax uint32 `json:"frameMax"`
Heartbeat uint16 `json:"heartbeat"`
}
func (msg *connectionTune) read(r io.Reader) (err error) {
@@ -181,7 +181,7 @@ func (msg *connectionTuneOk) read(r io.Reader) (err error) {
}
type connectionOpen struct {
VirtualHost string
VirtualHost string `json:"virtualHost"`
reserved1 string
reserved2 bool
}
@@ -580,9 +580,9 @@ func (msg *QueueDeclare) read(r io.Reader) (err error) {
}
type QueueDeclareOk struct {
Queue string
MessageCount uint32
ConsumerCount uint32
Queue string `json:"queue"`
MessageCount uint32 `json:"messageCount"`
ConsumerCount uint32 `json:"consumerCount"`
}
func (msg *QueueDeclareOk) read(r io.Reader) (err error) {
@@ -840,7 +840,7 @@ func (msg *BasicConsume) read(r io.Reader) (err error) {
}
type BasicConsumeOk struct {
ConsumerTag string
ConsumerTag string `json:"consumerTag"`
}
func (msg *BasicConsumeOk) read(r io.Reader) (err error) {
@@ -853,8 +853,8 @@ func (msg *BasicConsumeOk) read(r io.Reader) (err error) {
}
type basicCancel struct {
ConsumerTag string
NoWait bool
ConsumerTag string `json:"consumerTag"`
NoWait bool `json:"noWait"`
}
func (msg *basicCancel) read(r io.Reader) (err error) {
@@ -873,7 +873,7 @@ func (msg *basicCancel) read(r io.Reader) (err error) {
}
type basicCancelOk struct {
ConsumerTag string
ConsumerTag string `json:"consumerTag"`
}
func (msg *basicCancelOk) read(r io.Reader) (err error) {

View File

@@ -0,0 +1,24 @@
mizu_test_files_repo=https://github.com/up9inc/mizu-tests-data
mizu_test_files_tmp_folder="mizu-test-files-tmp"
requested_folder=$1
destination_folder_name=$2
echo "Going to download folder (${requested_folder}) from repo (${mizu_test_files_repo}) and save it to local folder (${destination_folder_name})"
echo "Cloning repo to tmp folder (${mizu_test_files_tmp_folder})"
git clone ${mizu_test_files_repo} ${mizu_test_files_tmp_folder} --no-checkout --depth 1 --filter=blob:none --sparse
cd ${mizu_test_files_tmp_folder}
echo "Adding sparse checkout folder"
git sparse-checkout add ${requested_folder}
echo "Checkout"
git checkout
echo "Moving folder to the destination location"
mv ${requested_folder} ../${destination_folder_name}
cd ..
echo "Removing the tmp folder"
rm -rf ${mizu_test_files_tmp_folder}
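
Usage note (not part of the script): the Makefiles in this change invoke the helper with the repo sub-folder to fetch and a local destination, for example ../get-folder-of-tests.sh bin/amqp bin or ../get-folder-of-tests.sh expect/http expect. The sparse, blobless clone keeps the download small: only the requested folder is checked out, moved into place, and the temporary clone is then removed.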

View File

@@ -8,9 +8,7 @@ test-update: test-pull-bin
@MIZU_TEST=1 TEST_UPDATE=1 go test -v ./... -coverpkg=./... -coverprofile=coverage.out -covermode=atomic
test-pull-bin:
@mkdir -p bin
@[ "${skipbin}" ] && echo "Skipping downloading BINs" || gsutil -o 'GSUtil:parallel_process_count=5' -o 'GSUtil:parallel_thread_count=5' -m cp gs://static.up9.io/mizu/test-pcap/bin/http/\*.bin bin
@[ "${skipbin}" ] && echo "Skipping downloading BINs" || ../get-folder-of-tests.sh bin/http bin
test-pull-expect:
@mkdir -p expect
@[ "${skipexpect}" ] && echo "Skipping downloading expected JSONs" || gsutil -o 'GSUtil:parallel_process_count=5' -o 'GSUtil:parallel_thread_count=5' -m cp -r gs://static.up9.io/mizu/test-pcap/expect12/http/\* expect
@[ "${skipexpect}" ] && echo "Skipping downloading expected JSONs" || ../get-folder-of-tests.sh expect/http expect

View File

@@ -18,10 +18,6 @@ func filterAndEmit(item *api.OutputChannelItem, emitter api.Emitter, options *ap
return
}
if options.EnableRedaction {
FilterSensitiveData(item, options)
}
replaceForwardedFor(item)
emitter.Emit(item)

View File

@@ -6,13 +6,16 @@ import (
"reflect"
"sort"
"strconv"
"strings"
"github.com/up9inc/mizu/tap/api"
)
func mapSliceRebuildAsMap(mapSlice []interface{}) (newMap map[string]interface{}) {
newMap = make(map[string]interface{})
for _, item := range mapSlice {
mergedMapSlice := mapSliceMergeRepeatedKeys(mapSlice)
for _, item := range mergedMapSlice {
h := item.(map[string]interface{})
newMap[h["name"].(string)] = h["value"]
}
@@ -20,6 +23,28 @@ func mapSliceRebuildAsMap(mapSlice []interface{}) (newMap map[string]interface{}
return
}
func mapSliceRebuildAsMergedMap(mapSlice []interface{}) (newMap map[string]interface{}) {
newMap = make(map[string]interface{})
mergedMapSlice := mapSliceMergeRepeatedKeys(mapSlice)
for _, item := range mergedMapSlice {
h := item.(map[string]interface{})
if valuesInterface, ok := h["value"].([]interface{}); ok {
var values []string
for _, valueInterface := range valuesInterface {
values = append(values, valueInterface.(string))
}
newMap[h["name"].(string)] = strings.Join(values, ",")
} else {
newMap[h["name"].(string)] = h["value"]
}
}
return
}
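
An illustrative example of the merged-map behaviour (a sketch, not part of the diff, assuming mapSliceMergeRepeatedKeys collapses repeated names into a slice of values, as the type switch above implies): repeated HTTP headers or cookies are joined into a single comma-separated value, which is what the HTTP dissector's Analyze step now relies on.

// Hypothetical input: the same header name appears twice.
headers := []interface{}{
	map[string]interface{}{"name": "Accept", "value": "text/html"},
	map[string]interface{}{"name": "Accept", "value": "application/json"},
}
merged := mapSliceRebuildAsMergedMap(headers)
// merged["Accept"] == "text/html,application/json"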
func mapSliceMergeRepeatedKeys(mapSlice []interface{}) (newMapSlice []interface{}) {
newMapSlice = make([]interface{}, 0)
valuesMap := make(map[string][]interface{})
@@ -47,6 +72,24 @@ func mapSliceMergeRepeatedKeys(mapSlice []interface{}) (newMapSlice []interface{
return
}
func representMapAsTable(mapToTable map[string]interface{}, selectorPrefix string) (representation string) {
var table []api.TableData
keys := make([]string, 0, len(mapToTable))
for k := range mapToTable {
keys = append(keys, k)
}
sort.Strings(keys)
for _, key := range keys {
table = append(table, createTableForKey(key, mapToTable[key], selectorPrefix)...)
}
obj, _ := json.Marshal(table)
representation = string(obj)
return
}
func representMapSliceAsTable(mapSlice []interface{}, selectorPrefix string) (representation string) {
var table []api.TableData
for _, item := range mapSlice {
@@ -54,34 +97,7 @@ func representMapSliceAsTable(mapSlice []interface{}, selectorPrefix string) (re
key := h["name"].(string)
value := h["value"]
var reflectKind reflect.Kind
reflectType := reflect.TypeOf(value)
if reflectType == nil {
reflectKind = reflect.Interface
} else {
reflectKind = reflect.TypeOf(value).Kind()
}
switch reflectKind {
case reflect.Slice:
fallthrough
case reflect.Array:
for i, el := range value.([]interface{}) {
selector := fmt.Sprintf("%s.%s[%d]", selectorPrefix, key, i)
table = append(table, api.TableData{
Name: fmt.Sprintf("%s [%d]", key, i),
Value: el,
Selector: selector,
})
}
default:
selector := fmt.Sprintf("%s[\"%s\"]", selectorPrefix, key)
table = append(table, api.TableData{
Name: key,
Value: value,
Selector: selector,
})
}
table = append(table, createTableForKey(key, value, selectorPrefix)...)
}
obj, _ := json.Marshal(table)
@@ -89,6 +105,41 @@ func representMapSliceAsTable(mapSlice []interface{}, selectorPrefix string) (re
return
}
func createTableForKey(key string, value interface{}, selectorPrefix string) []api.TableData {
var table []api.TableData
var reflectKind reflect.Kind
reflectType := reflect.TypeOf(value)
if reflectType == nil {
reflectKind = reflect.Interface
} else {
reflectKind = reflect.TypeOf(value).Kind()
}
switch reflectKind {
case reflect.Slice:
fallthrough
case reflect.Array:
for i, el := range value.([]interface{}) {
selector := fmt.Sprintf("%s.%s[%d]", selectorPrefix, key, i)
table = append(table, api.TableData{
Name: fmt.Sprintf("%s [%d]", key, i),
Value: el,
Selector: selector,
})
}
default:
selector := fmt.Sprintf("%s[\"%s\"]", selectorPrefix, key)
table = append(table, api.TableData{
Name: key,
Value: value,
Selector: selector,
})
}
return table
}
func representSliceAsTable(slice []interface{}, selectorPrefix string) (representation string) {
var table []api.TableData
for i, item := range slice {

View File

@@ -15,11 +15,13 @@ import (
)
var http10protocol = api.Protocol{
Name: "http",
ProtocolSummary: api.ProtocolSummary{
Name: "http",
Version: "1.0",
Abbreviation: "HTTP",
},
LongName: "Hypertext Transfer Protocol -- HTTP/1.0",
Abbreviation: "HTTP",
Macro: "http",
Version: "1.0",
BackgroundColor: "#205cf5",
ForegroundColor: "#ffffff",
FontSize: 12,
@@ -29,11 +31,13 @@ var http10protocol = api.Protocol{
}
var http11protocol = api.Protocol{
Name: "http",
ProtocolSummary: api.ProtocolSummary{
Name: "http",
Version: "1.1",
Abbreviation: "HTTP",
},
LongName: "Hypertext Transfer Protocol -- HTTP/1.1",
Abbreviation: "HTTP",
Macro: "http",
Version: "1.1",
BackgroundColor: "#205cf5",
ForegroundColor: "#ffffff",
FontSize: 12,
@@ -43,11 +47,13 @@ var http11protocol = api.Protocol{
}
var http2Protocol = api.Protocol{
Name: "http",
ProtocolSummary: api.ProtocolSummary{
Name: "http",
Version: "2.0",
Abbreviation: "HTTP/2",
},
LongName: "Hypertext Transfer Protocol Version 2 (HTTP/2)",
Abbreviation: "HTTP/2",
Macro: "http2",
Version: "2.0",
BackgroundColor: "#244c5a",
ForegroundColor: "#ffffff",
FontSize: 11,
@@ -57,11 +63,13 @@ var http2Protocol = api.Protocol{
}
var grpcProtocol = api.Protocol{
Name: "http",
ProtocolSummary: api.ProtocolSummary{
Name: "http",
Version: "2.0",
Abbreviation: "gRPC",
},
LongName: "Hypertext Transfer Protocol Version 2 (HTTP/2) [ gRPC over HTTP/2 ]",
Abbreviation: "gRPC",
Macro: "grpc",
Version: "2.0",
BackgroundColor: "#244c5a",
ForegroundColor: "#ffffff",
FontSize: 11,
@@ -71,11 +79,13 @@ var grpcProtocol = api.Protocol{
}
var graphQL1Protocol = api.Protocol{
Name: "http",
ProtocolSummary: api.ProtocolSummary{
Name: "http",
Version: "1.1",
Abbreviation: "GQL",
},
LongName: "Hypertext Transfer Protocol -- HTTP/1.1 [ GraphQL over HTTP/1.1 ]",
Abbreviation: "GQL",
Macro: "gql",
Version: "1.1",
BackgroundColor: "#e10098",
ForegroundColor: "#ffffff",
FontSize: 12,
@@ -85,11 +95,13 @@ var graphQL1Protocol = api.Protocol{
}
var graphQL2Protocol = api.Protocol{
Name: "http",
ProtocolSummary: api.ProtocolSummary{
Name: "http",
Version: "2.0",
Abbreviation: "GQL",
},
LongName: "Hypertext Transfer Protocol Version 2 (HTTP/2) [ GraphQL over HTTP/2 ]",
Abbreviation: "GQL",
Macro: "gql",
Version: "2.0",
BackgroundColor: "#e10098",
ForegroundColor: "#ffffff",
FontSize: 12,
@@ -98,6 +110,15 @@ var graphQL2Protocol = api.Protocol{
Priority: 0,
}
var protocolsMap = map[string]*api.Protocol{
http10protocol.ToString(): &http10protocol,
http11protocol.ToString(): &http11protocol,
http2Protocol.ToString(): &http2Protocol,
grpcProtocol.ToString(): &grpcProtocol,
graphQL1Protocol.ToString(): &graphQL1Protocol,
graphQL2Protocol.ToString(): &graphQL2Protocol,
}
const (
TypeHttpRequest = iota
TypeHttpResponse
@@ -109,6 +130,10 @@ func (d dissecting) Register(extension *api.Extension) {
extension.Protocol = &http11protocol
}
func (d dissecting) GetProtocols() map[string]*api.Protocol {
return protocolsMap
}
func (d dissecting) Ping() {
log.Printf("pong %s", http11protocol.Name)
}
@@ -261,19 +286,13 @@ func (d dissecting) Analyze(item *api.OutputChannelItem, resolvedSource string,
reqDetails["pathSegments"] = strings.Split(path, "/")[1:]
// Rearrange the maps for the querying
reqDetails["_headers"] = reqDetails["headers"]
reqDetails["headers"] = mapSliceRebuildAsMap(reqDetails["_headers"].([]interface{}))
resDetails["_headers"] = resDetails["headers"]
resDetails["headers"] = mapSliceRebuildAsMap(resDetails["_headers"].([]interface{}))
reqDetails["headers"] = mapSliceRebuildAsMergedMap(reqDetails["headers"].([]interface{}))
resDetails["headers"] = mapSliceRebuildAsMergedMap(resDetails["headers"].([]interface{}))
reqDetails["_cookies"] = reqDetails["cookies"]
reqDetails["cookies"] = mapSliceRebuildAsMap(reqDetails["_cookies"].([]interface{}))
resDetails["_cookies"] = resDetails["cookies"]
resDetails["cookies"] = mapSliceRebuildAsMap(resDetails["_cookies"].([]interface{}))
reqDetails["cookies"] = mapSliceRebuildAsMergedMap(reqDetails["cookies"].([]interface{}))
resDetails["cookies"] = mapSliceRebuildAsMergedMap(resDetails["cookies"].([]interface{}))
reqDetails["_queryString"] = reqDetails["queryString"]
reqDetails["_queryStringMerged"] = mapSliceMergeRepeatedKeys(reqDetails["_queryString"].([]interface{}))
reqDetails["queryString"] = mapSliceRebuildAsMap(reqDetails["_queryStringMerged"].([]interface{}))
reqDetails["queryString"] = mapSliceRebuildAsMap(reqDetails["queryString"].([]interface{}))
elapsedTime := item.Pair.Response.CaptureTime.Sub(item.Pair.Request.CaptureTime).Round(time.Millisecond).Milliseconds()
if elapsedTime < 0 {
@@ -281,7 +300,7 @@ func (d dissecting) Analyze(item *api.OutputChannelItem, resolvedSource string,
}
return &api.Entry{
Protocol: item.Protocol,
Protocol: item.Protocol.ProtocolSummary,
Capture: item.Capture,
Source: &api.TCP{
Name: resolvedSource,
@@ -315,7 +334,7 @@ func (d dissecting) Summarize(entry *api.Entry) *api.BaseEntry {
return &api.BaseEntry{
Id: entry.Id,
Protocol: entry.Protocol,
Protocol: *protocolsMap[entry.Protocol.ToString()],
Capture: entry.Capture,
Summary: summary,
SummaryQuery: summaryQuery,
@@ -328,7 +347,6 @@ func (d dissecting) Summarize(entry *api.Entry) *api.BaseEntry {
Destination: entry.Destination,
IsOutgoing: entry.Outgoing,
Latency: entry.ElapsedTime,
Rules: entry.Rules,
}
}
@@ -373,19 +391,19 @@ func representRequest(request map[string]interface{}) (repRequest []interface{})
repRequest = append(repRequest, api.SectionData{
Type: api.TABLE,
Title: "Headers",
Data: representMapSliceAsTable(request["_headers"].([]interface{}), `request.headers`),
Data: representMapAsTable(request["headers"].(map[string]interface{}), `request.headers`),
})
repRequest = append(repRequest, api.SectionData{
Type: api.TABLE,
Title: "Cookies",
Data: representMapSliceAsTable(request["_cookies"].([]interface{}), `request.cookies`),
Data: representMapAsTable(request["cookies"].(map[string]interface{}), `request.cookies`),
})
repRequest = append(repRequest, api.SectionData{
Type: api.TABLE,
Title: "Query String",
Data: representMapSliceAsTable(request["_queryStringMerged"].([]interface{}), `request.queryString`),
Data: representMapAsTable(request["queryString"].(map[string]interface{}), `request.queryString`),
})
postData, _ := request["postData"].(map[string]interface{})
@@ -461,13 +479,13 @@ func representResponse(response map[string]interface{}) (repResponse []interface
repResponse = append(repResponse, api.SectionData{
Type: api.TABLE,
Title: "Headers",
Data: representMapSliceAsTable(response["_headers"].([]interface{}), `response.headers`),
Data: representMapAsTable(response["headers"].(map[string]interface{}), `response.headers`),
})
repResponse = append(repResponse, api.SectionData{
Type: api.TABLE,
Title: "Cookies",
Data: representMapSliceAsTable(response["_cookies"].([]interface{}), `response.cookies`),
Data: representMapAsTable(response["cookies"].(map[string]interface{}), `response.cookies`),
})
content, _ := response["content"].(map[string]interface{})
@@ -503,10 +521,10 @@ func (d dissecting) Represent(request map[string]interface{}, response map[strin
func (d dissecting) Macros() map[string]string {
return map[string]string{
`http`: fmt.Sprintf(`proto.name == "%s" and proto.version.startsWith("%c")`, http11protocol.Name, http11protocol.Version[0]),
`http2`: fmt.Sprintf(`proto.name == "%s" and proto.version == "%s"`, http11protocol.Name, http2Protocol.Version),
`grpc`: fmt.Sprintf(`proto.name == "%s" and proto.version == "%s" and proto.macro == "%s"`, http11protocol.Name, grpcProtocol.Version, grpcProtocol.Macro),
`gql`: fmt.Sprintf(`proto.name == "%s" and proto.macro == "%s"`, graphQL1Protocol.Name, graphQL1Protocol.Macro),
`http`: fmt.Sprintf(`protocol.abbr == "%s"`, http11protocol.Abbreviation),
`http2`: fmt.Sprintf(`protocol.abbr == "%s"`, http2Protocol.Abbreviation),
`grpc`: fmt.Sprintf(`protocol.abbr == "%s"`, grpcProtocol.Abbreviation),
`gql`: fmt.Sprintf(`protocol.abbr == "%s"`, graphQL1Protocol.Abbreviation),
}
}

View File

@@ -44,10 +44,10 @@ func TestRegister(t *testing.T) {
func TestMacros(t *testing.T) {
expectedMacros := map[string]string{
"http": `proto.name == "http" and proto.version.startsWith("1")`,
"http2": `proto.name == "http" and proto.version == "2.0"`,
"grpc": `proto.name == "http" and proto.version == "2.0" and proto.macro == "grpc"`,
"gql": `proto.name == "http" and proto.macro == "gql"`,
"http": `protocol.abbr == "HTTP"`,
"http2": `protocol.abbr == "HTTP/2"`,
"grpc": `protocol.abbr == "gRPC"`,
"gql": `protocol.abbr == "GQL"`,
}
dissector := NewDissector()
macros := dissector.Macros()

View File

@@ -1,30 +1,14 @@
package http
import (
"bytes"
"encoding/json"
"encoding/xml"
"errors"
"fmt"
"io/ioutil"
"net/http"
"net/url"
"strings"
"github.com/beevik/etree"
"github.com/up9inc/mizu/tap/api"
)
const maskedFieldPlaceholderValue = "[REDACTED]"
const userAgent = "user-agent"
//these values MUST be all lower case and contain no `-` or `_` characters
var personallyIdentifiableDataFields = []string{"token", "authorization", "authentication", "cookie", "userid", "password",
"username", "user", "key", "passcode", "pass", "auth", "authtoken", "jwt",
"bearer", "clientid", "clientsecret", "redirecturi", "phonenumber",
"zip", "zipcode", "address", "country", "firstname", "lastname",
"middlename", "fname", "lname", "birthdate"}
func IsIgnoredUserAgent(item *api.OutputChannelItem, options *api.TrafficFilteringOptions) bool {
if item.Protocol.Name != "http" {
return false
@@ -48,192 +32,3 @@ func IsIgnoredUserAgent(item *api.OutputChannelItem, options *api.TrafficFilteri
return false
}
func FilterSensitiveData(item *api.OutputChannelItem, options *api.TrafficFilteringOptions) {
request := item.Pair.Request.Payload.(HTTPPayload).Data.(*http.Request)
response := item.Pair.Response.Payload.(HTTPPayload).Data.(*http.Response)
filterHeaders(&request.Header)
filterHeaders(&response.Header)
filterUrl(request.URL)
filterRequestBody(request, options)
filterResponseBody(response, options)
}
func filterRequestBody(request *http.Request, options *api.TrafficFilteringOptions) {
contenType := getContentTypeHeaderValue(request.Header)
body, err := ioutil.ReadAll(request.Body)
if err != nil {
return
}
filteredBody, err := filterHttpBody(body, contenType, options)
if err == nil {
request.Body = ioutil.NopCloser(bytes.NewBuffer(filteredBody))
} else {
request.Body = ioutil.NopCloser(bytes.NewBuffer(body))
}
}
func filterResponseBody(response *http.Response, options *api.TrafficFilteringOptions) {
contentType := getContentTypeHeaderValue(response.Header)
body, err := ioutil.ReadAll(response.Body)
if err != nil {
return
}
filteredBody, err := filterHttpBody(body, contentType, options)
if err == nil {
response.Body = ioutil.NopCloser(bytes.NewBuffer(filteredBody))
} else {
response.Body = ioutil.NopCloser(bytes.NewBuffer(body))
}
}
func filterHeaders(headers *http.Header) {
for key := range *headers {
if strings.ToLower(key) == userAgent {
continue
}
if strings.ToLower(key) == "cookie" {
headers.Del(key)
} else if isFieldNameSensitive(key) {
headers.Set(key, maskedFieldPlaceholderValue)
}
}
}
func getContentTypeHeaderValue(headers http.Header) string {
for key := range headers {
if strings.ToLower(key) == "content-type" {
return headers.Get(key)
}
}
return ""
}
func isFieldNameSensitive(fieldName string) bool {
if fieldName == ":authority" {
return false
}
name := strings.ToLower(fieldName)
name = strings.ReplaceAll(name, "_", "")
name = strings.ReplaceAll(name, "-", "")
name = strings.ReplaceAll(name, " ", "")
for _, sensitiveField := range personallyIdentifiableDataFields {
if strings.Contains(name, sensitiveField) {
return true
}
}
return false
}
func filterHttpBody(bytes []byte, contentType string, options *api.TrafficFilteringOptions) ([]byte, error) {
mimeType := strings.Split(contentType, ";")[0]
switch strings.ToLower(mimeType) {
case "application/json":
return filterJsonBody(bytes)
case "text/html":
fallthrough
case "application/xhtml+xml":
fallthrough
case "text/xml":
fallthrough
case "application/xml":
return filterXmlEtree(bytes)
case "text/plain":
if options != nil && options.PlainTextMaskingRegexes != nil {
return filterPlainText(bytes, options), nil
}
}
return bytes, nil
}
func filterPlainText(bytes []byte, options *api.TrafficFilteringOptions) []byte {
for _, regex := range options.PlainTextMaskingRegexes {
bytes = regex.ReplaceAll(bytes, []byte(maskedFieldPlaceholderValue))
}
return bytes
}
func filterXmlEtree(bytes []byte) ([]byte, error) {
if !IsValidXML(bytes) {
return nil, errors.New("Invalid XML")
}
xmlDoc := etree.NewDocument()
err := xmlDoc.ReadFromBytes(bytes)
if err != nil {
return nil, err
} else {
filterXmlElement(xmlDoc.Root())
}
return xmlDoc.WriteToBytes()
}
func IsValidXML(data []byte) bool {
return xml.Unmarshal(data, new(interface{})) == nil
}
func filterXmlElement(element *etree.Element) {
for i, attribute := range element.Attr {
if isFieldNameSensitive(attribute.Key) {
element.Attr[i].Value = maskedFieldPlaceholderValue
}
}
if element.ChildElements() == nil || len(element.ChildElements()) == 0 {
if isFieldNameSensitive(element.Tag) {
element.SetText(maskedFieldPlaceholderValue)
}
} else {
for _, element := range element.ChildElements() {
filterXmlElement(element)
}
}
}
func filterJsonBody(bytes []byte) ([]byte, error) {
var bodyJsonMap map[string]interface{}
err := json.Unmarshal(bytes, &bodyJsonMap)
if err != nil {
return nil, err
}
filterJsonMap(bodyJsonMap)
return json.Marshal(bodyJsonMap)
}
func filterJsonMap(jsonMap map[string]interface{}) {
for key, value := range jsonMap {
// Do not replace nil values with maskedFieldPlaceholderValue
if value == nil {
continue
}
nestedMap, isNested := value.(map[string]interface{})
if isNested {
filterJsonMap(nestedMap)
} else {
if isFieldNameSensitive(key) {
jsonMap[key] = maskedFieldPlaceholderValue
}
}
}
}
func filterUrl(url *url.URL) {
if len(url.RawQuery) > 0 {
newQueryArgs := make([]string, 0)
for urlQueryParamName, urlQueryParamValues := range url.Query() {
newValues := urlQueryParamValues
if isFieldNameSensitive(urlQueryParamName) {
newValues = []string{maskedFieldPlaceholderValue}
}
for _, paramValue := range newValues {
newQueryArgs = append(newQueryArgs, fmt.Sprintf("%s=%s", urlQueryParamName, paramValue))
}
}
url.RawQuery = strings.Join(newQueryArgs, "&")
}
}

View File

@@ -8,9 +8,7 @@ test-update: test-pull-bin
@MIZU_TEST=1 TEST_UPDATE=1 go test -v ./... -coverpkg=./... -coverprofile=coverage.out -covermode=atomic
test-pull-bin:
@mkdir -p bin
@[ "${skipbin}" ] && echo "Skipping downloading BINs" || gsutil -o 'GSUtil:parallel_process_count=5' -o 'GSUtil:parallel_thread_count=5' -m cp gs://static.up9.io/mizu/test-pcap/bin/kafka/\*.bin bin
@[ "${skipbin}" ] && echo "Skipping downloading BINs" || ../get-folder-of-tests.sh bin/kafka bin
test-pull-expect:
@mkdir -p expect
@[ "${skipexpect}" ] && echo "Skipping downloading expected JSONs" || gsutil -o 'GSUtil:parallel_process_count=5' -o 'GSUtil:parallel_thread_count=5' -m cp -r gs://static.up9.io/mizu/test-pcap/expect11/kafka/\* expect
@[ "${skipexpect}" ] && echo "Skipping downloading expected JSONs" || ../get-folder-of-tests.sh expect/kafka expect

View File

@@ -11,11 +11,13 @@ import (
)
var _protocol = api.Protocol{
Name: "kafka",
ProtocolSummary: api.ProtocolSummary{
Name: "kafka",
Version: "12",
Abbreviation: "KAFKA",
},
LongName: "Apache Kafka Protocol",
Abbreviation: "KAFKA",
Macro: "kafka",
Version: "12",
BackgroundColor: "#000000",
ForegroundColor: "#ffffff",
FontSize: 11,
@@ -24,12 +26,20 @@ var _protocol = api.Protocol{
Priority: 2,
}
var protocolsMap = map[string]*api.Protocol{
_protocol.ToString(): &_protocol,
}
type dissecting string
func (d dissecting) Register(extension *api.Extension) {
extension.Protocol = &_protocol
}
func (d dissecting) GetProtocols() map[string]*api.Protocol {
return protocolsMap
}
func (d dissecting) Ping() {
log.Printf("pong %s", _protocol.Name)
}
@@ -62,7 +72,7 @@ func (d dissecting) Analyze(item *api.OutputChannelItem, resolvedSource string,
elapsedTime = 0
}
return &api.Entry{
Protocol: _protocol,
Protocol: _protocol.ProtocolSummary,
Capture: item.Capture,
Source: &api.TCP{
Name: resolvedSource,
@@ -187,7 +197,7 @@ func (d dissecting) Summarize(entry *api.Entry) *api.BaseEntry {
return &api.BaseEntry{
Id: entry.Id,
Protocol: entry.Protocol,
Protocol: *protocolsMap[entry.Protocol.ToString()],
Capture: entry.Capture,
Summary: summary,
SummaryQuery: summaryQuery,
@@ -200,7 +210,6 @@ func (d dissecting) Summarize(entry *api.Entry) *api.BaseEntry {
Destination: entry.Destination,
IsOutgoing: entry.Outgoing,
Latency: entry.ElapsedTime,
Rules: entry.Rules,
}
}
@@ -243,7 +252,7 @@ func (d dissecting) Represent(request map[string]interface{}, response map[strin
func (d dissecting) Macros() map[string]string {
return map[string]string{
`kafka`: fmt.Sprintf(`proto.name == "%s"`, _protocol.Name),
`kafka`: fmt.Sprintf(`protocol.name == "%s"`, _protocol.Name),
}
}

View File

@@ -44,7 +44,7 @@ func TestRegister(t *testing.T) {
func TestMacros(t *testing.T) {
expectedMacros := map[string]string{
"kafka": `proto.name == "kafka"`,
"kafka": `protocol.name == "kafka"`,
}
dissector := NewDissector()
macros := dissector.Macros()

View File

@@ -8,9 +8,7 @@ test-update: test-pull-bin
@MIZU_TEST=1 TEST_UPDATE=1 go test -v ./... -coverpkg=./... -coverprofile=coverage.out -covermode=atomic
test-pull-bin:
@mkdir -p bin
@[ "${skipbin}" ] && echo "Skipping downloading BINs" || gsutil -o 'GSUtil:parallel_process_count=5' -o 'GSUtil:parallel_thread_count=5' -m cp gs://static.up9.io/mizu/test-pcap/bin/redis/\*.bin bin
@[ "${skipbin}" ] && echo "Skipping downloading BINs" || ../get-folder-of-tests.sh bin/redis bin
test-pull-expect:
@mkdir -p expect
@[ "${skipexpect}" ] && echo "Skipping downloading expected JSONs" || gsutil -o 'GSUtil:parallel_process_count=5' -o 'GSUtil:parallel_thread_count=5' -m cp -r gs://static.up9.io/mizu/test-pcap/expect11/redis/\* expect
@[ "${skipexpect}" ] && echo "Skipping downloading expected JSONs" || ../get-folder-of-tests.sh expect/redis expect

View File

@@ -11,11 +11,13 @@ import (
)
var protocol = api.Protocol{
Name: "redis",
ProtocolSummary: api.ProtocolSummary{
Name: "redis",
Version: "3.x",
Abbreviation: "REDIS",
},
LongName: "Redis Serialization Protocol",
Abbreviation: "REDIS",
Macro: "redis",
Version: "3.x",
BackgroundColor: "#a41e11",
ForegroundColor: "#ffffff",
FontSize: 11,
@@ -24,12 +26,20 @@ var protocol = api.Protocol{
Priority: 3,
}
var protocolsMap = map[string]*api.Protocol{
protocol.ToString(): &protocol,
}
type dissecting string
func (d dissecting) Register(extension *api.Extension) {
extension.Protocol = &protocol
}
func (d dissecting) GetProtocols() map[string]*api.Protocol {
return protocolsMap
}
func (d dissecting) Ping() {
log.Printf("pong %s", protocol.Name)
}
@@ -70,7 +80,7 @@ func (d dissecting) Analyze(item *api.OutputChannelItem, resolvedSource string,
elapsedTime = 0
}
return &api.Entry{
Protocol: protocol,
Protocol: protocol.ProtocolSummary,
Capture: item.Capture,
Source: &api.TCP{
Name: resolvedSource,
@@ -115,7 +125,7 @@ func (d dissecting) Summarize(entry *api.Entry) *api.BaseEntry {
return &api.BaseEntry{
Id: entry.Id,
Protocol: entry.Protocol,
Protocol: *protocolsMap[entry.Protocol.ToString()],
Capture: entry.Capture,
Summary: summary,
SummaryQuery: summaryQuery,
@@ -128,7 +138,6 @@ func (d dissecting) Summarize(entry *api.Entry) *api.BaseEntry {
Destination: entry.Destination,
IsOutgoing: entry.Outgoing,
Latency: entry.ElapsedTime,
Rules: entry.Rules,
}
}
@@ -144,7 +153,7 @@ func (d dissecting) Represent(request map[string]interface{}, response map[strin
func (d dissecting) Macros() map[string]string {
return map[string]string{
`redis`: fmt.Sprintf(`proto.name == "%s"`, protocol.Name),
`redis`: fmt.Sprintf(`protocol.name == "%s"`, protocol.Name),
}
}

View File

@@ -45,7 +45,7 @@ func TestRegister(t *testing.T) {
func TestMacros(t *testing.T) {
expectedMacros := map[string]string{
"redis": `proto.name == "redis"`,
"redis": `protocol.name == "redis"`,
}
dissector := NewDissector()
macros := dissector.Macros()

View File

@@ -4,11 +4,12 @@ go 1.17
require (
github.com/Masterminds/semver v1.5.0
github.com/cilium/ebpf v0.8.1
github.com/cilium/ebpf v0.9.0
github.com/go-errors/errors v1.4.2
github.com/google/gopacket v1.1.19
github.com/hashicorp/golang-lru v0.5.4
github.com/knightsc/gapstone v0.0.0-20191231144527-6fa5afaf11a9
github.com/moby/moby v20.10.17+incompatible
github.com/shirou/gopsutil v3.21.11+incompatible
github.com/struCoder/pidusage v0.2.1
github.com/up9inc/mizu/logger v0.0.0
@@ -28,6 +29,7 @@ require (
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/op/go-logging v0.0.0-20160315200505-970db520ece7 // indirect
github.com/sirupsen/logrus v1.8.1 // indirect
github.com/tklauser/go-sysconf v0.3.10 // indirect
github.com/tklauser/numcpus v0.4.0 // indirect
github.com/yusufpapurcu/wmi v1.2.2 // indirect
@@ -36,6 +38,7 @@ require (
golang.org/x/text v0.3.7 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
gotest.tools/v3 v3.3.0 // indirect
k8s.io/apimachinery v0.23.3 // indirect
k8s.io/klog/v2 v2.40.1 // indirect
k8s.io/utils v0.0.0-20220127004650-9b3446523e65 // indirect

View File

@@ -7,8 +7,8 @@ github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbt
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cilium/ebpf v0.8.1 h1:bLSSEbBLqGPXxls55pGr5qWZaTqcmfDJHhou7t254ao=
github.com/cilium/ebpf v0.8.1/go.mod h1:f5zLIM0FSNuAkSyLAN7X+Hy6yznlF1mNiWUMfxMtrgk=
github.com/cilium/ebpf v0.9.0 h1:ldiV+FscPCQ/p3mNEV4O02EPbUZJFsoEtHvIr9xLTvk=
github.com/cilium/ebpf v0.9.0/go.mod h1:+OhNOIXx/Fnu1IE8bJz2dzOA+VSfyTfdNUVdlQnxUFY=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
@@ -94,6 +94,8 @@ github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/mailru/easyjson v0.0.0-20190614124828-94de47d64c63/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/moby/moby v20.10.17+incompatible h1:TJJfyk2fLEgK+RzqVpFNkDkm0oEi+MLUfwt9lEYnp5g=
github.com/moby/moby v20.10.17+incompatible/go.mod h1:fDXVQ6+S340veQPv35CzDahGBmHsiclFwfEygB/TWMc=
github.com/moby/spdystream v0.2.0/go.mod h1:f7i0iNDQJ059oMTcWxx8MA/zKFIuD/lY+0GqbN2Wy8c=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
@@ -123,11 +125,15 @@ github.com/rogpeppe/go-internal v1.6.1 h1:/FiVV8dS/e+YqF2JvO3yXRFbBLTIuSDkuC7aBO
github.com/rogpeppe/go-internal v1.6.1/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTEfhy4qGm1nDQc=
github.com/shirou/gopsutil v3.21.11+incompatible h1:+1+c1VGhc88SSonWP6foOcLhvnKlUeu/erjjvaPEYiI=
github.com/shirou/gopsutil v3.21.11+incompatible/go.mod h1:5b4v6he4MtMOwMlS0TUMTu2PcXUg8+E1lC7eC3UO/RA=
github.com/sirupsen/logrus v1.8.1 h1:dJKuHgqk1NNQlqoA6BTlM1Wf9DOH3NBjQyu0h9+AZZE=
github.com/sirupsen/logrus v1.8.1/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/stoewer/go-strcase v1.2.0/go.mod h1:IBiWB2sKIp3wVVQ3Y035++gc+knqhUQag1KpM8ahLw8=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5CcY=
@@ -186,12 +192,14 @@ golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200217220822-9197077df867/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200519105757-fe76b779f299/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210119212857-b64e53b001e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
@@ -221,6 +229,7 @@ golang.org/x/tools v0.0.0-20200130002326-2f3ba24bd6e7/go.mod h1:TB2adYChydJhpapK
golang.org/x/tools v0.0.0-20200505023115-26f46d2f7ef8/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0=
golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@@ -269,6 +278,8 @@ gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C
gopkg.in/yaml.v3 v3.0.0-20200615113413-eeeca48fe776/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b h1:h8qDotaEPuJATrMmW04NCwg7v22aHH28wwpauUhK9Oo=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gotest.tools/v3 v3.3.0 h1:MfDY1b1/0xN1CyMlQDac0ziEy9zJQd9CXBRRDHw2jJo=
gotest.tools/v3 v3.3.0/go.mod h1:Mcr9QNxkg0uMvy/YElmo4SpXgJKWgQvYrT7Kw5RzJ1A=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
k8s.io/api v0.23.3 h1:KNrME8KHGr12Ozjf8ytOewKzZh6hl/hHUZeHddT3a38=

View File

@@ -85,7 +85,7 @@ func NewTcpAssembler(outputItems chan *api.OutputChannelItem, streamsMap api.Tcp
maxBufferedPagesTotal := GetMaxBufferedPagesPerConnection()
maxBufferedPagesPerConnection := GetMaxBufferedPagesTotal()
logger.Log.Infof("Assembler options: maxBufferedPagesTotal=%d, maxBufferedPagesPerConnection=%d, opts=%v",
logger.Log.Infof("Assembler options: maxBufferedPagesTotal=%d, maxBufferedPagesPerConnection=%d, opts=%+v",
maxBufferedPagesTotal, maxBufferedPagesPerConnection, opts)
a.Assembler.AssemblerOptions.MaxBufferedPagesTotal = maxBufferedPagesTotal
a.Assembler.AssemblerOptions.MaxBufferedPagesPerConnection = maxBufferedPagesPerConnection

View File

@@ -9,7 +9,7 @@ docker build -t mizu-ebpf-builder . || exit 1
BPF_TARGET=amd64
BPF_CFLAGS="-O2 -g -D__TARGET_ARCH_x86"
ARCH=$(uname -m)
if [[ $ARCH == "aarch64" ]]; then
if [[ $ARCH == "aarch64" || $ARCH == "arm64" ]]; then
BPF_TARGET=arm64
BPF_CFLAGS="-O2 -g -D__TARGET_ARCH_arm64"
fi
@@ -18,10 +18,10 @@ docker run --rm \
--name mizu-ebpf-builder \
-v $MIZU_HOME:/mizu \
-v $(go env GOPATH):/root/go \
-it mizu-ebpf-builder \
mizu-ebpf-builder \
sh -c "
BPF_TARGET=\"$BPF_TARGET\" BPF_CFLAGS=\"$BPF_CFLAGS\" go generate tap/tlstapper/tls_tapper.go
chown $(id -u):$(id -g) tap/tlstapper/tlstapper_bpf*
chown $(id -u):$(id -g) tap/tlstapper/tlstapper*_bpf*
" || exit 1
popd

View File

@@ -12,23 +12,20 @@ Copyright (C) UP9 Inc.
#include "include/common.h"
static __always_inline int add_address_to_chunk(struct pt_regs *ctx, struct tls_chunk* chunk, __u64 id, __u32 fd) {
static __always_inline int add_address_to_chunk(struct pt_regs *ctx, struct tls_chunk* chunk, __u64 id, __u32 fd, struct ssl_info* info) {
__u32 pid = id >> 32;
__u64 key = (__u64) pid << 32 | fd;
struct fd_info *fdinfo = bpf_map_lookup_elem(&file_descriptor_to_ipv4, &key);
conn_flags *flags = bpf_map_lookup_elem(&connection_context, &key);
if (fdinfo == NULL) {
// Happens when we don't catch the connect / accept (if the connection is created before tapping is started)
if (flags == NULL) {
return 0;
}
int err = bpf_probe_read(chunk->address, sizeof(chunk->address), fdinfo->ipv4_addr);
chunk->flags |= (fdinfo->flags & FLAGS_IS_CLIENT_BIT);
chunk->flags |= (*flags & FLAGS_IS_CLIENT_BIT);
if (err != 0) {
log_error(ctx, LOG_ERROR_READING_FD_ADDRESS, id, err, 0l);
return 0;
}
bpf_probe_read(&chunk->address_info, sizeof(chunk->address_info), &info->address_info);
return 1;
}
@@ -104,7 +101,7 @@ static __always_inline void output_ssl_chunk(struct pt_regs *ctx, struct ssl_inf
chunk->len = count_bytes;
chunk->fd = info->fd;
if (!add_address_to_chunk(ctx, chunk, id, chunk->fd)) {
if (!add_address_to_chunk(ctx, chunk, id, chunk->fd, info)) {
// Without an address, we drop the chunk because there is not much to do with it in Go
//
return;
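
To make the new data flow easier to follow: the direction bit of the chunk still comes from a per-connection flags word looked up under the pid<<32|fd key, while the endpoint addresses are now copied from the ssl_info that the tcp kprobes fill in, instead of being read from a stored sockaddr. Below is a minimal user-space sketch in plain C (not the bpf program itself); the FLAGS_IS_CLIENT_BIT value and the address_info field widths are assumptions for illustration.

#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define FLAGS_IS_CLIENT_BIT (1u << 0)   /* assumed bit value */

typedef uint32_t conn_flags;

struct address_info {                   /* field names match the hunk; widths assumed */
    uint32_t saddr, daddr;
    uint16_t sport, dport;
};

struct chunk_model {                    /* stand-in for struct tls_chunk */
    uint32_t flags;
    struct address_info address_info;
};

/* Same key composition as the bpf code: pid in the upper 32 bits, fd in the lower. */
static uint64_t conn_key(uint32_t pid, uint32_t fd) {
    return (uint64_t) pid << 32 | fd;
}

int main(void) {
    conn_flags flags = FLAGS_IS_CLIENT_BIT;   /* as looked up in connection_context */
    struct address_info from_kprobes = { 0x0100007f, 0x0100007f, 54321, 443 };
    struct chunk_model chunk = {0};

    chunk.flags |= (flags & FLAGS_IS_CLIENT_BIT);                 /* direction bit only */
    memcpy(&chunk.address_info, &from_kprobes, sizeof(chunk.address_info));

    printf("key=%#llx client=%d dport=%u\n",
           (unsigned long long) conn_key(1234, 7),
           !!(chunk.flags & FLAGS_IS_CLIENT_BIT),
           (unsigned) chunk.address_info.dport);
    return 0;
}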

View File

@@ -14,7 +14,6 @@ Copyright (C) UP9 Inc.
#define IPV4_ADDR_LEN (16)
struct accept_info {
__u64* sockaddr;
__u32* addrlen;
};
@@ -39,7 +38,6 @@ void sys_enter_accept4(struct sys_enter_accept4_ctx *ctx) {
struct accept_info info = {};
info.sockaddr = ctx->sockaddr;
info.addrlen = ctx->addrlen;
long err = bpf_map_update_elem(&accept_syscall_context, &id, &info, BPF_ANY);
@@ -94,26 +92,21 @@ void sys_exit_accept4(struct sys_exit_accept4_ctx *ctx) {
return;
}
struct fd_info fdinfo = {
.flags = 0
};
bpf_probe_read(fdinfo.ipv4_addr, sizeof(fdinfo.ipv4_addr), info.sockaddr);
conn_flags flags = 0;
__u32 pid = id >> 32;
__u32 fd = (__u32) ctx->ret;
__u64 key = (__u64) pid << 32 | fd;
err = bpf_map_update_elem(&file_descriptor_to_ipv4, &key, &fdinfo, BPF_ANY);
err = bpf_map_update_elem(&connection_context, &key, &flags, BPF_ANY);
if (err != 0) {
log_error(ctx, LOG_ERROR_PUTTING_FD_MAPPING, id, err, ORIGIN_SYS_EXIT_ACCEPT4_CODE);
log_error(ctx, LOG_ERROR_PUTTING_CONNECTION_CONTEXT, id, err, ORIGIN_SYS_EXIT_ACCEPT4_CODE);
}
}
struct connect_info {
__u64 fd;
__u64* sockaddr;
__u32 addrlen;
};
@@ -138,7 +131,6 @@ void sys_enter_connect(struct sys_enter_connect_ctx *ctx) {
struct connect_info info = {};
info.sockaddr = ctx->sockaddr;
info.addrlen = ctx->addrlen;
info.fd = ctx->fd;
@@ -193,19 +185,15 @@ void sys_exit_connect(struct sys_exit_connect_ctx *ctx) {
return;
}
struct fd_info fdinfo = {
.flags = FLAGS_IS_CLIENT_BIT
};
bpf_probe_read(fdinfo.ipv4_addr, sizeof(fdinfo.ipv4_addr), info.sockaddr);
conn_flags flags = FLAGS_IS_CLIENT_BIT;
__u32 pid = id >> 32;
__u32 fd = (__u32) info.fd;
__u64 key = (__u64) pid << 32 | fd;
err = bpf_map_update_elem(&file_descriptor_to_ipv4, &key, &fdinfo, BPF_ANY);
err = bpf_map_update_elem(&connection_context, &key, &flags, BPF_ANY);
if (err != 0) {
log_error(ctx, LOG_ERROR_PUTTING_FD_MAPPING, id, err, ORIGIN_SYS_EXIT_CONNECT_CODE);
log_error(ctx, LOG_ERROR_PUTTING_CONNECTION_CONTEXT, id, err, ORIGIN_SYS_EXIT_CONNECT_CODE);
}
}
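
The accept/connect tracepoints above keep the usual two-phase pattern: sys_enter_* stashes the syscall arguments under the caller's pid_tgid, and sys_exit_* completes the pid<<32|fd key once the file descriptor is known, now recording only a direction flag (0 for accept, FLAGS_IS_CLIENT_BIT for connect) instead of the old fd_info with a copied socket address. A compact user-space model of the connect path, with toy variables standing in for the bpf maps and an assumed flag value:

#include <stdint.h>
#include <stdio.h>

#define FLAGS_IS_CLIENT_BIT (1u << 0)   /* assumed bit value */

static uint32_t pending_fd;             /* stands in for connect_syscall_context, keyed by pid_tgid */
static uint32_t connection_flag;        /* stands in for connection_context[pid<<32|fd] */

void on_enter_connect(uint64_t pid_tgid, uint32_t fd) {
    (void) pid_tgid;                    /* the bpf map key on enter */
    pending_fd = fd;                    /* connect() knows its fd up front; accept() learns it on exit */
}

void on_exit_connect(uint64_t pid_tgid, long ret) {
    if (ret != 0)
        return;                         /* illustration only: record just successful connects */
    uint32_t pid = pid_tgid >> 32;
    uint64_t key = (uint64_t) pid << 32 | pending_fd;
    connection_flag = FLAGS_IS_CLIENT_BIT;  /* connect() side is the client; accept() stores 0 */
    printf("connection_context[%#llx] = %u\n",
           (unsigned long long) key, (unsigned) connection_flag);
}

int main(void) {
    uint64_t pid_tgid = (uint64_t) 1234 << 32 | 5678;
    on_enter_connect(pid_tgid, 7);
    on_exit_connect(pid_tgid, 0);
    return 0;
}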

View File

@@ -10,8 +10,9 @@ Copyright (C) UP9 Inc.
#include "include/log.h"
#include "include/logger_messages.h"
#include "include/pids.h"
#include "include/common.h"
struct sys_enter_read_ctx {
struct sys_enter_read_write_ctx {
__u64 __unused_syscall_header;
__u32 __unused_syscall_nr;
@@ -20,8 +21,46 @@ struct sys_enter_read_ctx {
__u64 count;
};
struct sys_exit_read_write_ctx {
__u64 __unused_syscall_header;
__u32 __unused_syscall_nr;
__u64 ret;
};
static __always_inline void fd_tracepoints_handle_openssl(struct sys_enter_read_write_ctx *ctx, __u64 id, struct ssl_info *infoPtr, struct bpf_map_def *map_fd, __u64 origin_code) {
struct ssl_info info;
long err = bpf_probe_read(&info, sizeof(struct ssl_info), infoPtr);
if (err != 0) {
log_error(ctx, LOG_ERROR_READING_SSL_CONTEXT, id, err, origin_code);
return;
}
info.fd = ctx->fd;
err = bpf_map_update_elem(map_fd, &id, &info, BPF_ANY);
if (err != 0) {
log_error(ctx, LOG_ERROR_PUTTING_FILE_DESCRIPTOR, id, err, origin_code);
return;
}
}
static __always_inline void fd_tracepoints_handle_go(struct sys_enter_read_write_ctx *ctx, __u64 id, struct bpf_map_def *map_fd, __u64 origin_code) {
__u32 fd = ctx->fd;
long err = bpf_map_update_elem(map_fd, &id, &fd, BPF_ANY);
if (err != 0) {
log_error(ctx, LOG_ERROR_PUTTING_FILE_DESCRIPTOR, id, err, origin_code);
return;
}
}
SEC("tracepoint/syscalls/sys_enter_read")
void sys_enter_read(struct sys_enter_read_ctx *ctx) {
void sys_enter_read(struct sys_enter_read_write_ctx *ctx) {
__u64 id = bpf_get_current_pid_tgid();
if (!should_tap(id >> 32)) {
@@ -30,38 +69,15 @@ void sys_enter_read(struct sys_enter_read_ctx *ctx) {
struct ssl_info *infoPtr = bpf_map_lookup_elem(&openssl_read_context, &id);
if (infoPtr == NULL) {
return;
}
struct ssl_info info;
long err = bpf_probe_read(&info, sizeof(struct ssl_info), infoPtr);
if (err != 0) {
log_error(ctx, LOG_ERROR_READING_SSL_CONTEXT, id, err, ORIGIN_SYS_ENTER_READ_CODE);
return;
}
info.fd = ctx->fd;
err = bpf_map_update_elem(&openssl_read_context, &id, &info, BPF_ANY);
if (err != 0) {
log_error(ctx, LOG_ERROR_PUTTING_FILE_DESCRIPTOR, id, err, ORIGIN_SYS_ENTER_READ_CODE);
if (infoPtr != NULL) {
fd_tracepoints_handle_openssl(ctx, id, infoPtr, &openssl_read_context, ORIGIN_SYS_ENTER_READ_CODE);
}
fd_tracepoints_handle_go(ctx, id, &go_kernel_read_context, ORIGIN_SYS_ENTER_READ_CODE);
}
struct sys_enter_write_ctx {
__u64 __unused_syscall_header;
__u32 __unused_syscall_nr;
__u64 fd;
__u64* buf;
__u64 count;
};
SEC("tracepoint/syscalls/sys_enter_write")
void sys_enter_write(struct sys_enter_write_ctx *ctx) {
void sys_enter_write(struct sys_enter_read_write_ctx *ctx) {
__u64 id = bpf_get_current_pid_tgid();
if (!should_tap(id >> 32)) {
@@ -70,23 +86,25 @@ void sys_enter_write(struct sys_enter_write_ctx *ctx) {
struct ssl_info *infoPtr = bpf_map_lookup_elem(&openssl_write_context, &id);
if (infoPtr == NULL) {
return;
}
struct ssl_info info;
long err = bpf_probe_read(&info, sizeof(struct ssl_info), infoPtr);
if (err != 0) {
log_error(ctx, LOG_ERROR_READING_SSL_CONTEXT, id, err, ORIGIN_SYS_ENTER_WRITE_CODE);
return;
}
info.fd = ctx->fd;
err = bpf_map_update_elem(&openssl_write_context, &id, &info, BPF_ANY);
if (err != 0) {
log_error(ctx, LOG_ERROR_PUTTING_FILE_DESCRIPTOR, id, err, ORIGIN_SYS_ENTER_WRITE_CODE);
if (infoPtr != NULL) {
fd_tracepoints_handle_openssl(ctx, id, infoPtr, &openssl_write_context, ORIGIN_SYS_ENTER_WRITE_CODE);
}
fd_tracepoints_handle_go(ctx, id, &go_kernel_write_context, ORIGIN_SYS_ENTER_WRITE_CODE);
}
SEC("tracepoint/syscalls/sys_exit_read")
void sys_exit_read(struct sys_exit_read_write_ctx *ctx) {
__u64 id = bpf_get_current_pid_tgid();
// Delete from go map. The value is not used after exiting this syscall.
// Keep value in openssl map.
bpf_map_delete_elem(&go_kernel_read_context, &id);
}
SEC("tracepoint/syscalls/sys_exit_write")
void sys_exit_write(struct sys_exit_read_write_ctx *ctx) {
__u64 id = bpf_get_current_pid_tgid();
// Delete from go map. The value is not used after exiting this syscall.
// Keep value in openssl map.
bpf_map_delete_elem(&go_kernel_write_context, &id);
}
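
The refactor above splits the read/write tracepoints into two helpers: fd_tracepoints_handle_openssl copies the existing ssl_info, stamps the current fd into it and writes it back, while fd_tracepoints_handle_go stores just the raw fd for the Go pipeline. The new sys_exit probes then delete only the Go entry, which (per the comments) is not needed after the syscall returns, while the openssl entry is kept for the uprobes. A toy user-space model of that lifecycle, with single-slot variables standing in for the per-thread bpf maps:

#include <stdint.h>
#include <stdbool.h>

struct ssl_info_model { uint32_t fd; /* buffer pointer, length, addresses, ... */ };

static struct ssl_info_model openssl_read_ctx;   /* keyed by pid_tgid in bpf */
static bool openssl_read_ctx_present;
static uint32_t go_kernel_read_fd;               /* keyed by pid_tgid in bpf */
static bool go_kernel_read_fd_present;

void on_sys_enter_read(uint32_t fd) {
    if (openssl_read_ctx_present)
        openssl_read_ctx.fd = fd;    /* fd_tracepoints_handle_openssl: update in place */
    go_kernel_read_fd = fd;          /* fd_tracepoints_handle_go: store the bare fd */
    go_kernel_read_fd_present = true;
}

void on_sys_exit_read(void) {
    /* Only the Go entry is dropped; it is presumably consumed by probes that
       run while the syscall is in flight and, as the comment above says, is
       not used after exit. The openssl entry stays for the SSL uprobes. */
    go_kernel_read_fd_present = false;
}

int main(void) {
    openssl_read_ctx_present = true; /* pretend an SSL_read uprobe ran first */
    on_sys_enter_read(7);
    on_sys_exit_read();
    return 0;
}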

View File

@@ -35,11 +35,12 @@ using `bpf_probe_read` calls in `go_crypto_tls_get_fd_from_tcp_conn` function.
SOURCES:
Tracing Go Functions with eBPF (before 1.17): https://www.grant.pizza/blog/tracing-go-functions-with-ebpf-part-2/
Tracing Go Functions with eBPF (<=1.16): https://www.grant.pizza/blog/tracing-go-functions-with-ebpf-part-2/
Challenges of BPF Tracing Go: https://blog.0x74696d.com/posts/challenges-of-bpf-tracing-go/
x86 calling conventions: https://en.wikipedia.org/wiki/X86_calling_conventions
Plan 9 from Bell Labs: https://en.wikipedia.org/wiki/Plan_9_from_Bell_Labs
The issue for calling convention change in Go: https://github.com/golang/go/issues/40724
Go ABI0 (<=1.16) specification: https://go.dev/doc/asm
Proposal of Register-based Go calling convention: https://go.googlesource.com/proposal/+/master/design/40724-register-calling.md
Go internal ABI (1.17) specification: https://go.googlesource.com/go/+/refs/heads/dev.regabi/src/cmd/compile/internal-abi.md
Go internal ABI (current) specification: https://go.googlesource.com/go/+/refs/heads/master/src/cmd/compile/abi-internal.md
@@ -55,10 +56,60 @@ Capstone Engine: https://www.capstone-engine.org/
#include "include/logger_messages.h"
#include "include/pids.h"
#include "include/common.h"
#include "include/go_abi_0.h"
#include "include/go_abi_internal.h"
#include "include/go_types.h"
static __always_inline __u32 go_crypto_tls_get_fd_from_tcp_conn(struct pt_regs *ctx) {
// TODO: cilium/ebpf does not support .kconfig. Therefore, for now, we build object files per kernel version.
// Error: reference to .kconfig: not supported
// See: https://github.com/cilium/ebpf/issues/698
// extern int LINUX_KERNEL_VERSION __kconfig;
enum ABI {
ABI0=0,
ABIInternal=1,
};
#if defined(bpf_target_x86)
// get_goid_from_thread_local_storage function is x86 specific
static __always_inline __u32 get_goid_from_thread_local_storage(__u64 *goroutine_id) {
int zero = 0;
int one = 1;
struct goid_offsets* offsets = bpf_map_lookup_elem(&goid_offsets_map, &zero);
if (offsets == NULL) {
return 0;
}
// Get the task that is currently assigned to this thread.
struct task_struct *task = (struct task_struct*) bpf_get_current_task();
if (task == NULL) {
return 0;
}
// Read task->thread
struct thread_struct *thr;
bpf_probe_read(&thr, sizeof(thr), &task->thread);
// Read task->thread.fsbase
u64 fsbase;
#ifdef KERNEL_BEFORE_4_6
// TODO: if (LINUX_KERNEL_VERSION <= KERNEL_VERSION(4, 6, 0)) {
fsbase = BPF_CORE_READ((struct thread_struct___v46 *)thr, fs);
#else
fsbase = BPF_CORE_READ(thr, fsbase);
#endif
// Get the Goroutine ID (goid) which is stored in thread-local storage.
size_t g_addr;
bpf_probe_read_user(&g_addr, sizeof(void *), (void*)(fsbase + offsets->g_addr_offset));
bpf_probe_read_user(goroutine_id, sizeof(void *), (void*)(g_addr + offsets->goid_offset));
return 1;
}
#endif
static __always_inline __u32 go_crypto_tls_get_fd_from_tcp_conn(struct pt_regs *ctx, enum ABI abi) {
struct go_interface conn;
long err;
__u64 addr;
@@ -67,8 +118,15 @@ static __always_inline __u32 go_crypto_tls_get_fd_from_tcp_conn(struct pt_regs *
if (err != 0) {
return invalid_fd;
}
#else
addr = GO_ABI_INTERNAL_PT_REGS_R1(ctx);
#elif defined(bpf_target_x86)
if (abi == ABI0) {
err = bpf_probe_read(&addr, sizeof(addr), (void*)GO_ABI_INTERNAL_PT_REGS_SP(ctx)+0x8);
if (err != 0) {
return invalid_fd;
}
} else {
addr = GO_ABI_INTERNAL_PT_REGS_R1(ctx);
}
#endif
err = bpf_probe_read(&conn, sizeof(conn), (void*)addr);
@@ -91,7 +149,7 @@ static __always_inline __u32 go_crypto_tls_get_fd_from_tcp_conn(struct pt_regs *
return fd;
}
static __always_inline void go_crypto_tls_uprobe(struct pt_regs *ctx, struct bpf_map_def* go_context) {
static __always_inline void go_crypto_tls_uprobe(struct pt_regs *ctx, struct bpf_map_def* go_context, enum ABI abi) {
__u64 pid_tgid = bpf_get_current_pid_tgid();
__u64 pid = pid_tgid >> 32;
if (!should_tap(pid)) {
@@ -107,14 +165,52 @@ static __always_inline void go_crypto_tls_uprobe(struct pt_regs *ctx, struct bpf
log_error(ctx, LOG_ERROR_READING_BYTES_COUNT, pid_tgid, err, ORIGIN_SSL_UPROBE_CODE);
return;
}
#else
info.buffer_len = GO_ABI_INTERNAL_PT_REGS_R2(ctx);
#elif defined(bpf_target_x86)
if (abi == ABI0) {
err = bpf_probe_read(&info.buffer_len, sizeof(__u32), (void*)GO_ABI_0_PT_REGS_SP(ctx)+0x18);
if (err != 0) {
log_error(ctx, LOG_ERROR_READING_BYTES_COUNT, pid_tgid, err, ORIGIN_SSL_UPROBE_CODE);
return;
}
} else {
info.buffer_len = GO_ABI_INTERNAL_PT_REGS_R2(ctx);
}
#endif
info.buffer = (void*)GO_ABI_INTERNAL_PT_REGS_R4(ctx);
info.fd = go_crypto_tls_get_fd_from_tcp_conn(ctx);
// GO_ABI_INTERNAL_PT_REGS_GP is Goroutine address
__u64 pid_fp = pid << 32 | GO_ABI_INTERNAL_PT_REGS_GP(ctx);
#if defined(bpf_target_x86)
if (abi == ABI0) {
err = bpf_probe_read(&info.buffer, sizeof(__u32), (void*)GO_ABI_0_PT_REGS_SP(ctx)+0x11);
if (err != 0) {
log_error(ctx, LOG_ERROR_READING_FROM_SSL_BUFFER, pid_tgid, err, ORIGIN_SSL_UPROBE_CODE);
return;
}
// We basically add a 00 suffix to the hex address.
info.buffer = (void*)((long)info.buffer << 8);
} else {
#endif
info.buffer = (void*)GO_ABI_INTERNAL_PT_REGS_R4(ctx);
#if defined(bpf_target_x86)
}
#endif
info.fd = go_crypto_tls_get_fd_from_tcp_conn(ctx, abi);
__u64 goroutine_id;
if (abi == ABI0) {
#if defined(bpf_target_arm64)
// In case of ABI0 and arm64, it's stored in the Goroutine register
goroutine_id = GO_ABI_0_PT_REGS_GP(ctx);
#elif defined(bpf_target_x86)
// In case of ABI0 and amd64, it's stored in the thread-local storage
int status = get_goid_from_thread_local_storage(&goroutine_id);
if (!status) {
return;
}
#endif
} else {
// GO_ABI_INTERNAL_PT_REGS_GP is the Goroutine address in ABIInternal
goroutine_id = GO_ABI_INTERNAL_PT_REGS_GP(ctx);
}
__u64 pid_fp = pid << 32 | goroutine_id;
err = bpf_map_update_elem(go_context, &pid_fp, &info, BPF_ANY);
if (err != 0) {
@@ -124,15 +220,30 @@ static __always_inline void go_crypto_tls_uprobe(struct pt_regs *ctx, struct bpf
return;
}
static __always_inline void go_crypto_tls_ex_uprobe(struct pt_regs *ctx, struct bpf_map_def* go_context, __u32 flags) {
static __always_inline void go_crypto_tls_ex_uprobe(struct pt_regs *ctx, struct bpf_map_def* go_context, struct bpf_map_def* go_user_kernel_context, __u32 flags, enum ABI abi) {
__u64 pid_tgid = bpf_get_current_pid_tgid();
__u64 pid = pid_tgid >> 32;
if (!should_tap(pid)) {
return;
}
// GO_ABI_INTERNAL_PT_REGS_GP is Goroutine address
__u64 pid_fp = pid << 32 | GO_ABI_INTERNAL_PT_REGS_GP(ctx);
__u64 goroutine_id;
if (abi == ABI0) {
#if defined(bpf_target_arm64)
// In case of ABI0 and arm64, it's stored in the Goroutine register
goroutine_id = GO_ABI_0_PT_REGS_GP(ctx);
#elif defined(bpf_target_x86)
// In case of ABI0 and amd64, it's stored in the thread-local storage
int status = get_goid_from_thread_local_storage(&goroutine_id);
if (!status) {
return;
}
#endif
} else {
// GO_ABI_INTERNAL_PT_REGS_GP is the Goroutine address in ABIInternal
goroutine_id = GO_ABI_INTERNAL_PT_REGS_GP(ctx);
}
__u64 pid_fp = pid << 32 | goroutine_id;
struct ssl_info *info_ptr = bpf_map_lookup_elem(go_context, &pid_fp);
if (info_ptr == NULL) {
@@ -156,8 +267,17 @@ static __always_inline void go_crypto_tls_ex_uprobe(struct pt_regs *ctx, struct
return;
}
info.buffer_len = GO_ABI_INTERNAL_PT_REGS_R7(ctx); // n in return n, nil
#else
info.buffer_len = GO_ABI_INTERNAL_PT_REGS_R1(ctx); // n in return n, nil
#elif defined(bpf_target_x86)
if (abi == ABI0) {
// n in return n, nil
err = bpf_probe_read(&info.buffer_len, sizeof(__u32), (void*)GO_ABI_0_PT_REGS_SP(ctx)+0x28);
if (err != 0) {
log_error(ctx, LOG_ERROR_READING_BYTES_COUNT, pid_tgid, err, ORIGIN_SSL_UPROBE_CODE);
return;
}
} else {
info.buffer_len = GO_ABI_INTERNAL_PT_REGS_R1(ctx); // n in return n, nil
}
#endif
// This check ignores 0 length reads (such reads result in an error)
if (info.buffer_len <= 0) {
@@ -165,27 +285,70 @@ static __always_inline void go_crypto_tls_ex_uprobe(struct pt_regs *ctx, struct
}
}
__u64 key = (__u64) pid << 32 | info_ptr->fd;
struct address_info *address_info = bpf_map_lookup_elem(go_user_kernel_context, &key);
// Ideally we would delete the entry from the map after reading it,
// but sometimes the uprobe is called twice in a row without the tcp kprobes in between to fill in
// the entry again. We keep it in the map and rely on LRU eviction instead.
if (address_info == NULL) {
log_error(ctx, LOG_ERROR_GETTING_GO_USER_KERNEL_CONTEXT, pid_tgid, info_ptr->fd, err);
return;
}
info.address_info.daddr = address_info->daddr;
info.address_info.dport = address_info->dport;
info.address_info.saddr = address_info->saddr;
info.address_info.sport = address_info->sport;
output_ssl_chunk(ctx, &info, info.buffer_len, pid_tgid, flags);
return;
}
SEC("uprobe/go_crypto_tls_write")
void BPF_KPROBE(go_crypto_tls_write) {
go_crypto_tls_uprobe(ctx, &go_write_context);
SEC("uprobe/go_crypto_tls_abi0_write")
int BPF_KPROBE(go_crypto_tls_abi0_write) {
go_crypto_tls_uprobe(ctx, &go_write_context, ABI0);
return 1;
}
SEC("uprobe/go_crypto_tls_write_ex")
void BPF_KPROBE(go_crypto_tls_write_ex) {
go_crypto_tls_ex_uprobe(ctx, &go_write_context, 0);
SEC("uprobe/go_crypto_tls_abi0_write_ex")
int BPF_KPROBE(go_crypto_tls_abi0_write_ex) {
go_crypto_tls_ex_uprobe(ctx, &go_write_context, &go_user_kernel_write_context, 0, ABI0);
return 1;
}
SEC("uprobe/go_crypto_tls_read")
void BPF_KPROBE(go_crypto_tls_read) {
go_crypto_tls_uprobe(ctx, &go_read_context);
SEC("uprobe/go_crypto_tls_abi0_read")
int BPF_KPROBE(go_crypto_tls_abi0_read) {
go_crypto_tls_uprobe(ctx, &go_read_context, ABI0);
return 1;
}
SEC("uprobe/go_crypto_tls_read_ex")
void BPF_KPROBE(go_crypto_tls_read_ex) {
go_crypto_tls_ex_uprobe(ctx, &go_read_context, FLAGS_IS_READ_BIT);
SEC("uprobe/go_crypto_tls_abi0_read_ex")
int BPF_KPROBE(go_crypto_tls_abi0_read_ex) {
go_crypto_tls_ex_uprobe(ctx, &go_read_context, &go_user_kernel_read_context, FLAGS_IS_READ_BIT, ABI0);
return 1;
}
SEC("uprobe/go_crypto_tls_abi_internal_write")
int BPF_KPROBE(go_crypto_tls_abi_internal_write) {
go_crypto_tls_uprobe(ctx, &go_write_context, ABIInternal);
return 1;
}
SEC("uprobe/go_crypto_tls_abi_internal_write_ex")
int BPF_KPROBE(go_crypto_tls_abi_internal_write_ex) {
go_crypto_tls_ex_uprobe(ctx, &go_write_context, &go_user_kernel_write_context, 0, ABIInternal);
return 1;
}
SEC("uprobe/go_crypto_tls_abi_internal_read")
int BPF_KPROBE(go_crypto_tls_abi_internal_read) {
go_crypto_tls_uprobe(ctx, &go_read_context, ABIInternal);
return 1;
}
SEC("uprobe/go_crypto_tls_abi_internal_read_ex")
int BPF_KPROBE(go_crypto_tls_abi_internal_read_ex) {
go_crypto_tls_ex_uprobe(ctx, &go_read_context, &go_user_kernel_read_context, FLAGS_IS_READ_BIT, ABIInternal);
return 1;
}
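
The uprobe file now registers separate ABI0 (<=1.16) and ABIInternal (1.17+) entry points and keys its per-call state by pid<<32|goroutine_id rather than by the Goroutine register alone, because under ABI0 on amd64 the goroutine id has to be fetched from thread-local storage (fsbase, then the g pointer, then the goid offset) instead of from a register. A stand-alone sketch of that selection logic, with the register and TLS reads replaced by stub functions (the real code uses the GO_ABI_*_PT_REGS_* macros and get_goid_from_thread_local_storage):

#include <stdint.h>
#include <stdio.h>

enum ABI { ABI0 = 0, ABIInternal = 1 };

/* Stubs standing in for the register / thread-local-storage reads. */
static uint64_t read_abi0_goroutine_register(void) { return 0x42; } /* arm64: g register */
static uint64_t read_abi_internal_goroutine(void)  { return 0x42; } /* amd64: R14        */
static int read_goid_from_tls(uint64_t *goid)      { *goid = 0x42; return 1; }

static int goroutine_key(enum ABI abi, int on_x86, uint32_t pid, uint64_t *key) {
    uint64_t goroutine_id;

    if (abi == ABI0 && on_x86) {
        /* ABI0 on amd64: walk fsbase -> g -> goid in thread-local storage */
        if (!read_goid_from_tls(&goroutine_id))
            return 0;                   /* bail out, like the real probe does */
    } else if (abi == ABI0) {
        goroutine_id = read_abi0_goroutine_register();
    } else {
        goroutine_id = read_abi_internal_goroutine();
    }

    /* Same composition as the go_context keys above: pid on top, goroutine id below. */
    *key = (uint64_t) pid << 32 | goroutine_id;
    return 1;
}

int main(void) {
    uint64_t key;
    if (goroutine_key(ABI0, 1, 1234, &key))
        printf("go_context key = %#llx\n", (unsigned long long) key);
    return 0;
}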

View File

@@ -7,9 +7,11 @@ Copyright (C) UP9 Inc.
#ifndef __COMMON__
#define __COMMON__
#define AF_INET 2 /* Internet IP Protocol */
const __s32 invalid_fd = -1;
static int add_address_to_chunk(struct pt_regs *ctx, struct tls_chunk* chunk, __u64 id, __u32 fd);
static int add_address_to_chunk(struct pt_regs *ctx, struct tls_chunk* chunk, __u64 id, __u32 fd, struct ssl_info* info);
static void send_chunk_part(struct pt_regs *ctx, __u8* buffer, __u64 id, struct tls_chunk* chunk, int start, int end);
static void send_chunk(struct pt_regs *ctx, __u8* buffer, __u64 id, struct tls_chunk* chunk);
static void output_ssl_chunk(struct pt_regs *ctx, struct ssl_info* info, int count_bytes, __u64 id, __u32 flags);

View File

@@ -0,0 +1,52 @@
/*
Note: This file is licensed differently from the rest of the project
SPDX-License-Identifier: GPL-2.0
Copyright (C) UP9 Inc.
*/
#ifndef __GO_ABI_0__
#define __GO_ABI_0__
/*
Go ABI0 (<=1.16) specification
https://go.dev/doc/asm
Since ABI0 is a stack-based calling convention, we only need the stack pointer and,
where applicable, the Goroutine pointer.
*/
#include "target_arch.h"
#if defined(bpf_target_x86)
#ifdef __i386__
#define GO_ABI_0_PT_REGS_SP(x) ((x)->esp)
#else
#define GO_ABI_0_PT_REGS_SP(x) ((x)->sp)
#endif
#elif defined(bpf_target_arm)
#define GO_ABI_0_PT_REGS_SP(x) ((x)->uregs[13])
#define GO_ABI_0_PT_REGS_GP(x) ((x)->uregs[10])
#elif defined(bpf_target_arm64)
/* arm64 provides struct user_pt_regs instead of struct pt_regs to userspace */
struct pt_regs;
#define PT_REGS_ARM64 const volatile struct user_pt_regs
#define GO_ABI_0_PT_REGS_SP(x) (((PT_REGS_ARM64 *)(x))->sp)
#define GO_ABI_0_PT_REGS_GP(x) (((PT_REGS_ARM64 *)(x))->regs[18])
#elif defined(bpf_target_powerpc)
#define GO_ABI_0_PT_REGS_SP(x) ((x)->sp)
#define GO_ABI_0_PT_REGS_GP(x) ((x)->gpr[30])
#endif
#endif /* __GO_ABI_0__ */
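
Because ABI0 passes every argument and result on the goroutine stack, the probes only need GO_ABI_0_PT_REGS_SP plus per-function offsets; the hunks above read the receiver at SP+0x8, the buffer length at SP+0x18 and the returned n at SP+0x28. A rough, compilable illustration of that idea against a fake stack; the offsets are the ones these probes use for crypto/tls, not general constants:

#include <stdint.h>
#include <string.h>
#include <stdio.h>

int main(void) {
    unsigned char fake_stack[0x30] = {0};     /* pretend goroutine stack frame */
    uint64_t conn_ptr = 0xc000010000ULL;      /* made-up values for the demo */
    uint32_t buf_len = 512;

    /* What the Go caller would have laid out before the call. */
    memcpy(fake_stack + 0x08, &conn_ptr, sizeof(conn_ptr));
    memcpy(fake_stack + 0x18, &buf_len, sizeof(buf_len));

    /* What the probe does: read at fixed offsets from the stack pointer. */
    uint64_t sp = (uint64_t)(uintptr_t) fake_stack;   /* GO_ABI_0_PT_REGS_SP(ctx) */
    uint64_t read_conn;
    uint32_t read_len;
    memcpy(&read_conn, (void *)(uintptr_t)(sp + 0x08), sizeof(read_conn));
    memcpy(&read_len,  (void *)(uintptr_t)(sp + 0x18), sizeof(read_len));

    printf("conn=%#llx len=%u\n", (unsigned long long) read_conn, (unsigned) read_len);
    return 0;
}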

View File

@@ -8,54 +8,11 @@ Copyright (C) UP9 Inc.
#define __GO_ABI_INTERNAL__
/*
Go internal ABI specification
Go internal ABI (1.17/current) specification
https://go.googlesource.com/go/+/refs/heads/master/src/cmd/compile/abi-internal.md
*/
/* Scan the ARCH passed in from ARCH env variable */
#if defined(__TARGET_ARCH_x86)
#define bpf_target_x86
#define bpf_target_defined
#elif defined(__TARGET_ARCH_s390)
#define bpf_target_s390
#define bpf_target_defined
#elif defined(__TARGET_ARCH_arm)
#define bpf_target_arm
#define bpf_target_defined
#elif defined(__TARGET_ARCH_arm64)
#define bpf_target_arm64
#define bpf_target_defined
#elif defined(__TARGET_ARCH_mips)
#define bpf_target_mips
#define bpf_target_defined
#elif defined(__TARGET_ARCH_powerpc)
#define bpf_target_powerpc
#define bpf_target_defined
#elif defined(__TARGET_ARCH_sparc)
#define bpf_target_sparc
#define bpf_target_defined
#else
#undef bpf_target_defined
#endif
/* Fall back to what the compiler says */
#ifndef bpf_target_defined
#if defined(__x86_64__)
#define bpf_target_x86
#elif defined(__s390__)
#define bpf_target_s390
#elif defined(__arm__)
#define bpf_target_arm
#elif defined(__aarch64__)
#define bpf_target_arm64
#elif defined(__mips__)
#define bpf_target_mips
#elif defined(__powerpc__)
#define bpf_target_powerpc
#elif defined(__sparc__)
#define bpf_target_sparc
#endif
#endif
#include "target_arch.h"
#if defined(bpf_target_x86)
@@ -78,15 +35,15 @@ https://github.com/golang/go/blob/go1.17.6/src/cmd/compile/internal/ssa/gen/AMD6
#else
#define GO_ABI_INTERNAL_PT_REGS_R1(x) ((x)->rax)
#define GO_ABI_INTERNAL_PT_REGS_R2(x) ((x)->rcx)
#define GO_ABI_INTERNAL_PT_REGS_R3(x) ((x)->rdx)
#define GO_ABI_INTERNAL_PT_REGS_R4(x) ((x)->rbx)
#define GO_ABI_INTERNAL_PT_REGS_R5(x) ((x)->rbp)
#define GO_ABI_INTERNAL_PT_REGS_R6(x) ((x)->rsi)
#define GO_ABI_INTERNAL_PT_REGS_R7(x) ((x)->rdi)
#define GO_ABI_INTERNAL_PT_REGS_SP(x) ((x)->rsp)
#define GO_ABI_INTERNAL_PT_REGS_FP(x) ((x)->rbp)
#define GO_ABI_INTERNAL_PT_REGS_R1(x) ((x)->ax)
#define GO_ABI_INTERNAL_PT_REGS_R2(x) ((x)->cx)
#define GO_ABI_INTERNAL_PT_REGS_R3(x) ((x)->dx)
#define GO_ABI_INTERNAL_PT_REGS_R4(x) ((x)->bx)
#define GO_ABI_INTERNAL_PT_REGS_R5(x) ((x)->bp)
#define GO_ABI_INTERNAL_PT_REGS_R6(x) ((x)->si)
#define GO_ABI_INTERNAL_PT_REGS_R7(x) ((x)->di)
#define GO_ABI_INTERNAL_PT_REGS_SP(x) ((x)->sp)
#define GO_ABI_INTERNAL_PT_REGS_FP(x) ((x)->bp)
#define GO_ABI_INTERNAL_PT_REGS_GP(x) ((x)->r14)
#endif
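
The block deleted here is the standard target-architecture detection; judging by the new include lines it moves into the shared include/target_arch.h that both ABI headers now pull in (that file itself is not shown in this diff). The amd64 register macros also switch from the uapi pt_regs field names (rax, rsp, ...) to the vmlinux/BTF field names (ax, sp, ...), which matches headers.h switching from <linux/ptrace.h> to the generated vmlinux_x86.h below. Condensed, the detection pattern is: honour the __TARGET_ARCH_* macro passed in via BPF_CFLAGS first, then fall back to the compiler's own target defines:

/* Condensed sketch of the arch-selection preprocessor pattern. */
#if defined(__TARGET_ARCH_x86)
  #define bpf_target_x86
  #define bpf_target_defined
#elif defined(__TARGET_ARCH_arm64)
  #define bpf_target_arm64
  #define bpf_target_defined
#endif

/* Fall back to what the compiler says. */
#ifndef bpf_target_defined
  #if defined(__x86_64__)
    #define bpf_target_x86
  #elif defined(__aarch64__)
    #define bpf_target_arm64
  #endif
#endif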

View File

@@ -8,9 +8,16 @@ Copyright (C) UP9 Inc.
#define __HEADERS__
#include <stddef.h>
#include <linux/bpf.h>
#include <linux/ptrace.h>
#include "target_arch.h"
#include "vmlinux_x86.h"
#include "vmlinux_arm64.h"
#include "legacy_kernel.h"
#include <bpf/bpf_endian.h>
#include <bpf/bpf_helpers.h>
#include "bpf/bpf_tracing.h"
#include <bpf/bpf_tracing.h>
#include <bpf/bpf_core_read.h>
#endif /* __HEADERS__ */

View File

@@ -0,0 +1,50 @@
#ifndef __LEGACY_KERNEL_H__
#define __LEGACY_KERNEL_H__
#if defined(bpf_target_x86)
struct thread_struct___v46 {
struct desc_struct tls_array[3];
unsigned long sp0;
unsigned long sp;
unsigned short es;
unsigned short ds;
unsigned short fsindex;
unsigned short gsindex;
unsigned long fs;
unsigned long gs;
struct perf_event ptrace_bps[4];
unsigned long debugreg6;
unsigned long ptrace_dr7;
unsigned long cr2;
unsigned long trap_nr;
unsigned long error_code;
unsigned long io_bitmap_ptr;
unsigned long iopl;
unsigned io_bitmap_max;
long: 63;
long: 64;
long: 64;
long: 64;
long: 64;
long: 64;
struct fpu fpu;
};
#elif defined(bpf_target_arm)
// Commented out since thread_struct is not used in ARM64 yet.
// struct thread_struct___v46 {
// struct cpu_context cpu_context;
// long: 64;
// unsigned long tp_value;
// struct fpsimd_state fpsimd_state;
// unsigned long fault_address;
// unsigned long fault_code;
// struct debug_info debug;
// };
#endif
#endif /* __LEGACY_KERNEL_H__ */

View File

@@ -10,22 +10,28 @@ Copyright (C) UP9 Inc.
// Must be synced with bpf_logger_messages.go
//
#define LOG_ERROR_READING_BYTES_COUNT (0)
#define LOG_ERROR_READING_FD_ADDRESS (1)
#define LOG_ERROR_READING_FROM_SSL_BUFFER (2)
#define LOG_ERROR_BUFFER_TOO_BIG (3)
#define LOG_ERROR_ALLOCATING_CHUNK (4)
#define LOG_ERROR_READING_SSL_CONTEXT (5)
#define LOG_ERROR_PUTTING_SSL_CONTEXT (6)
#define LOG_ERROR_GETTING_SSL_CONTEXT (7)
#define LOG_ERROR_MISSING_FILE_DESCRIPTOR (8)
#define LOG_ERROR_PUTTING_FILE_DESCRIPTOR (9)
#define LOG_ERROR_PUTTING_ACCEPT_INFO (10)
#define LOG_ERROR_GETTING_ACCEPT_INFO (11)
#define LOG_ERROR_READING_ACCEPT_INFO (12)
#define LOG_ERROR_PUTTING_FD_MAPPING (13)
#define LOG_ERROR_PUTTING_CONNECT_INFO (14)
#define LOG_ERROR_GETTING_CONNECT_INFO (15)
#define LOG_ERROR_READING_CONNECT_INFO (16)
#define LOG_ERROR_READING_FROM_SSL_BUFFER (1)
#define LOG_ERROR_BUFFER_TOO_BIG (2)
#define LOG_ERROR_ALLOCATING_CHUNK (3)
#define LOG_ERROR_READING_SSL_CONTEXT (4)
#define LOG_ERROR_PUTTING_SSL_CONTEXT (5)
#define LOG_ERROR_GETTING_SSL_CONTEXT (6)
#define LOG_ERROR_MISSING_FILE_DESCRIPTOR (7)
#define LOG_ERROR_PUTTING_FILE_DESCRIPTOR (8)
#define LOG_ERROR_PUTTING_ACCEPT_INFO (9)
#define LOG_ERROR_GETTING_ACCEPT_INFO (10)
#define LOG_ERROR_READING_ACCEPT_INFO (11)
#define LOG_ERROR_PUTTING_CONNECTION_CONTEXT (12)
#define LOG_ERROR_PUTTING_CONNECT_INFO (13)
#define LOG_ERROR_GETTING_CONNECT_INFO (14)
#define LOG_ERROR_READING_CONNECT_INFO (15)
#define LOG_ERROR_READING_SOCKET_FAMILY (16)
#define LOG_ERROR_READING_SOCKET_DADDR (17)
#define LOG_ERROR_READING_SOCKET_SADDR (18)
#define LOG_ERROR_READING_SOCKET_DPORT (19)
#define LOG_ERROR_READING_SOCKET_SPORT (20)
#define LOG_ERROR_PUTTING_GO_USER_KERNEL_CONTEXT (21)
#define LOG_ERROR_GETTING_GO_USER_KERNEL_CONTEXT (22)
// Sometimes we have the same error, happening from different locations.
// in order to be able to distinct between them in the log, we add an

Some files were not shown because too many files have changed in this diff.