Compare commits

...

36 Commits

Author SHA1 Message Date
leon-up9
d11770681b full height (#1202)
Co-authored-by: Leon <>
2022-07-13 18:37:22 +03:00
gadotroee
e9719cba3a Add time range to stats (#1199) 2022-07-13 17:21:18 +03:00
leon-up9
15f7b889e2 height change (#1201)
Co-authored-by: Leon <>
2022-07-13 13:37:08 +03:00
RoyUP9
d98ac0e8f7 Removed redundant IgnoredUserAgents field (#1198) 2022-07-12 20:41:42 +03:00
gadotroee
a3c236ff0a Fix colors map initialization (#1200) 2022-07-12 20:05:21 +03:00
gadotroee
4b280ecd6d Hide Response tab if there is no response (#1197) 2022-07-12 18:38:39 +03:00
leon-up9
de554f5fb6 ui/ include scss files in common (#1195)
* include scss files

* exported color

Co-authored-by: Leon <>
2022-07-12 11:50:24 +03:00
RoyUP9
7c159fffc0 Added redact using insertion filter (#1196) 2022-07-12 10:19:24 +03:00
M. Mert Yıldıran
1f2f63d11b Implement AMQP request-response matcher (#1091)
* Implement the basis of AMQP request-response matching

* Fix `package.json`

* Add `ExchangeDeclareOk`

* Add `ConnectionCloseOk`

* Add `BasicConsumeOk`

* Add `QueueBindOk`

* Add `representEmptyResponse` and fix `BasicPublish` and `BasicDeliver`

* Fix ident and matcher, add `connectionOpen`, `channelOpen`, `connectionTune`, `basicCancel`

* Fix linter

* Fix the unit tests

* #run_acceptance_tests

* #run_acceptance_tests

* Fix the tests #run_acceptance_tests

* Log don't panic

* Don't skip AMQP acceptance tests #run_acceptance_tests

* Revert "Don't skip AMQP acceptance tests #run_acceptance_tests"

This reverts commit c60e9cf747.

* Remove `Details` section from `representEmpty`

* Add `This request or response has no data.` text
2022-07-11 17:33:25 +03:00
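The commit above pairs AMQP methods with their `*Ok` counterparts (for example `QueueBind`/`QueueBindOk`). The matcher itself is not shown in this compare view; the sketch below only illustrates the general keyed request/response pattern, with hypothetical type names and a made-up key format.

```go
// Hypothetical illustration of the request-response matching pattern the
// commit describes; the identifiers below are NOT the ones used in the
// Mizu AMQP extension.
package main

import (
	"fmt"
	"sync"
)

type requestResponsePair struct {
	Request  interface{}
	Response interface{}
}

// matcher pairs a request with its response by a shared key, e.g. a
// "<connection>_<channel>" ident for AMQP frames (key format assumed).
type matcher struct {
	openRequests sync.Map // key -> pending request
}

// registerRequest stores the request until its response arrives.
func (m *matcher) registerRequest(key string, req interface{}) {
	m.openRequests.Store(key, req)
}

// registerResponse returns the completed pair if a matching request exists.
func (m *matcher) registerResponse(key string, res interface{}) (*requestResponsePair, bool) {
	if req, ok := m.openRequests.LoadAndDelete(key); ok {
		return &requestResponsePair{Request: req, Response: res}, true
	}
	return nil, false
}

func main() {
	m := &matcher{}
	m.registerRequest("conn1_ch1", "QueueBind")
	if pair, ok := m.registerResponse("conn1_ch1", "QueueBindOk"); ok {
		fmt.Printf("matched %v -> %v\n", pair.Request, pair.Response)
	}
}
```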
RoyUP9
e2544aea12 Remove duplication of Headers, Cookies and QueryString (#1192) 2022-07-11 13:16:22 +03:00
Nimrod Gilboa Markevich
57e60073f5 Generate bpf files before running tests (#1194) 2022-07-11 12:31:45 +03:00
Nimrod Gilboa Markevich
f220ad2f1a Delete ebpf object files (#1190)
Do not track object files in git.
Generate the files with `make bpf` or during `make agent`.
2022-07-11 12:08:20 +03:00
RoyUP9
5875ba0eb3 Fixed panic in socket cleanup (#1193) 2022-07-11 11:18:58 +03:00
leon-up9
9aaf3f1423 Ui/Download request replay (#1188)
* added icon

* download & upload

* button changes

* clean up

* changes

* pkj json

* img

* removed codeEditor options

* changes

Co-authored-by: Leon <>
2022-07-10 16:48:18 +03:00
Nimrod Gilboa Markevich
a2463b739a Improve tls info for openssl with kprobes (#1177)
Instead of going through the socket fd, addresses are obtained in kprobe/tcp_sendmsg on SSL write and kprobe/tcp_recvmsg on SSL read. The TCP kprobes and the OpenSSL uprobes communicate through the id->sslInfo BPF map.
2022-07-07 19:11:54 +03:00
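The id-keyed map handoff described above lives in the kernel, but it can be pictured from user space with `github.com/cilium/ebpf`, which this repository already depends on. The sketch below is purely illustrative: the pin path, key type and value layout are assumptions, not the actual tlstapper definitions.

```go
// Hypothetical user-space sketch of inspecting an id -> ssl info map with
// github.com/cilium/ebpf. The pin path, key type and value layout are
// assumptions, not the real tlstapper map.
package main

import (
	"fmt"
	"log"

	"github.com/cilium/ebpf"
)

// sslInfo mirrors a hypothetical map value carrying the addresses that
// kprobe/tcp_sendmsg and kprobe/tcp_recvmsg recorded for an SSL read/write.
type sslInfo struct {
	SrcIP   uint32
	DstIP   uint32
	SrcPort uint16
	DstPort uint16
}

func main() {
	// Assume the tapper pinned the map; this path is made up.
	m, err := ebpf.LoadPinnedMap("/sys/fs/bpf/ssl_info", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer m.Close()

	var id uint64
	var info sslInfo
	it := m.Iterate()
	for it.Next(&id, &info) {
		fmt.Printf("id=%d src=%d:%d dst=%d:%d\n", id, info.SrcIP, info.SrcPort, info.DstIP, info.DstPort)
	}
	if err := it.Err(); err != nil {
		log.Fatal(err)
	}
}
```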
AmitUp9
c010d336bb add date to timeline ticks (#1191) 2022-07-07 14:09:41 +03:00
RoyUP9
710411e112 Replaced ProtocolId with Protocol Summary (#1189) 2022-07-07 12:05:59 +03:00
AmitUp9
274fbeb34a warning cleaning from console (#1187)
* warning cleaning from console

* code cleaning
2022-07-06 13:13:04 +03:00
leon-up9
38c05a6634 UI/feature flag for replay modal (#1186)
* context added

* import added

* chnages

* ui enabled

* moved to Consts

* changes to recoil

* change

* new useEffect

Co-authored-by: Leon <>
2022-07-06 11:19:46 +03:00
leon-up9
d857935889 move icon to right side (#1185)
Co-authored-by: Leon <>
2022-07-05 16:35:03 +03:00
gadotroee
ec11b21b51 Add available protocols to the stats endpoint (colors for methods are coming from server) (#1184)
* add protocols array to the endpoint

* no message

* no message

* fix tests and small fix for the iteration

* fix the color of the protocol

* Get protocols list and method colors from server

* fix tests

* cr fixes

Co-authored-by: Amit Fainholts <amit@up9.com>
2022-07-05 15:30:39 +03:00
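The server-derived method colors referenced in this commit are computed deterministically from the protocol and method names; the `getColorForMethod` addition in the stats provider diff further down hashes the pair and keeps the first six hex digits. A standalone sketch of that derivation:

```go
// Standalone sketch of the deterministic method-color derivation shown in
// the stats provider diff below: md5 of "<protocol>_<method>", first six
// hex characters, prefixed with '#'.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
)

func getColorForMethod(protocolName, methodName string) string {
	hash := md5.Sum([]byte(fmt.Sprintf("%v_%v", protocolName, methodName)))
	return fmt.Sprintf("#%v", hex.EncodeToString(hash[:])[:6])
}

func main() {
	// Every agent instance computes the same color for the same pair,
	// so the UI no longer needs its own color map.
	fmt.Println(getColorForMethod("HTTP", "GET"))
}
```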
M. Mert Yıldıran
52c9251c00 Add ABI0 support to Go crypto/tls eBPF tracer (#1169)
* Determine the Go ABI and get `goid` offset from DWARF

* Add `ABI` enum and morph the function according to the detected ABI

* Pass `goid` offset to an eBPF map to retrieve it in eBPF context

* Add `vmlinux.h` and implement `get_goid_from_thread_local_storage`

* Fix BPF verifier errors

* Update the comments

* Add `go_abi_0.h` and implement `ABI0` specific reads for `arm64`

* Upgrade `github.com/cilium/ebpf` to `v0.9.0`

* Add a comment

* Add macros for x86 specific parts

* Update `x86.o`

* Fix the map key type

* Add `user_pt_regs`

* Update arm64 object file

* Fix the version detection logic

* Add `getGStructOffset` method

* Define `goid_offsets`, `goid_offsets_map` structs and pass the offsets correctly

* Fix the `net.TCPConn` and buffer addresses for `ABI0`

* Remove comment

* Fix the issues for arm64 build

* Update x86.o

* Revert "Fix the issues for arm64 build"

This reverts commit 48b041b1b6.

* Revert `user_pt_regs`

* Add `vmlinux` directory

* Fix the `build.sh` and `Dockerfile`

* Add vmlinux_arm64.h

* Disable `get_goid_from_thread_local_storage` on ARM64 with a macro

* Update x86.o

* Update arm64.o

* x86

* arm64

* Fix the cross-compilation issue from x86 to arm64

* Fix the same thing for x86

* Use `BPF_CORE_READ` macro instead of `bpf_ringbuf_reserve` to support kernel versions older than 5.8

Also;
Add legacy version of thread_struct: thread_struct___v46
Build an additional object file for the kernel versions older than or equal to 4.6 and load them accordingly.
Add github.com/moby/moby

* Make #define directives more definitive

* Select the x86 and arm64 versions of `vmlinux.h` using macros

* Put `goid` offsets into the map before installing `uprobe`(s)

* arm64

* #run_acceptance_tests

* Remove a forgotten `fmt.Printf`

* Log the detected Linux kernel version
2022-07-05 14:35:30 +03:00
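The commit above reads the `goid` field offset of `runtime.g` from the target binary's DWARF data and pushes it into an eBPF map before installing the uprobes. The sketch below outlines that flow under simplifying assumptions; the DWARF walk, map layout and binary path are illustrative, not the tlstapper implementation.

```go
// Hypothetical sketch of the "get goid offset from DWARF and pass it to an
// eBPF map" flow described in the commit. The map layout and binary path
// are assumptions; error handling is simplified.
package main

import (
	"debug/dwarf"
	"debug/elf"
	"fmt"
	"log"

	"github.com/cilium/ebpf"
)

// goidOffset walks the DWARF data of a Go binary and returns the offset of
// the goid field inside the runtime.g struct.
func goidOffset(binaryPath string) (int64, error) {
	f, err := elf.Open(binaryPath)
	if err != nil {
		return 0, err
	}
	defer f.Close()

	d, err := f.DWARF()
	if err != nil {
		return 0, err
	}

	r := d.Reader()
	inG := false
	for {
		entry, err := r.Next()
		if err != nil || entry == nil {
			break
		}
		name, _ := entry.Val(dwarf.AttrName).(string)
		switch entry.Tag {
		case dwarf.TagStructType:
			inG = name == "runtime.g"
		case dwarf.TagMember:
			if inG && name == "goid" {
				if off, ok := entry.Val(dwarf.AttrDataMemberLoc).(int64); ok {
					return off, nil
				}
			}
		}
	}
	return 0, fmt.Errorf("goid offset not found")
}

func main() {
	off, err := goidOffset("/path/to/target-go-binary") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}

	// Hypothetical map: the real code defines goid_offsets/goid_offsets_map
	// structs and fills them before installing the uprobe(s).
	offsets, err := ebpf.NewMap(&ebpf.MapSpec{
		Type:       ebpf.Hash,
		KeySize:    4,
		ValueSize:  8,
		MaxEntries: 1,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer offsets.Close()

	var key uint32
	if err := offsets.Put(key, uint64(off)); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("goid offset %d written to map\n", off)
}
```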
leon-up9
f3a6b3a9d4 with loading HOC (#1181)
* withLoading

* optional props

* LoadingWrapper

* pr comments

* changes

Co-authored-by: Leon <>
2022-07-05 12:23:47 +03:00
leon-up9
5f73c2d50a allow skipping formating error (#1183)
Co-authored-by: Leon <>
2022-07-04 16:24:23 +03:00
gadotroee
d6944d467c Merge traffic stats endpoints to one and add auto interval logic (#1174) 2022-07-04 10:01:49 +03:00
leon-up9
57078517a4 Ui/replay demo notes (#1180)
* close ws on open

* chech if json before parsing

* setting defualt tab reponse and missing dep

* remove redundant

* space

* PR fixes

* remove redundant

* changed order

* Revert "remove redundant"

This reverts commit 2f0bef5d33.

* revert order change

* changes

* change

* changes

Co-authored-by: Leon <>
2022-07-03 15:36:16 +03:00
AmitUp9
b4bc09637c fix select in traffic stats ui (#1179) 2022-07-03 11:48:38 +03:00
lirazyehezkel
302333b4ae TRA-4622 Remove rules feature UI (#1178)
* Removed policy rules (validation rules) feature

* updated test pcap

* Remove rules

* fix replay in rules

Co-authored-by: Roy Island <roy@up9.com>
Co-authored-by: RoyUP9 <87927115+RoyUP9@users.noreply.github.com>
Co-authored-by: Roee Gadot <roee.gadot@up9.com>
2022-07-03 11:32:23 +03:00
leon-up9
13ed8eb58a Ui/TRA-4607 - replay mizu requests (#1165)
* modal & keyValueTable added

* added pulse animation
KeyValueTable Behavior improved
CodeEditor addded

* style changed

* codeEditor styling
support query params

* send request data

* finally stop loading

* select width

* methods and requesr format

* icon changed & moved near request tab

* accordions added and response presented

* 2 way params biding

* remove redundant

* host path fixed

* fix path input

* icon styles

* fallback for format body

* refresh button

* changes

* remove redundant

* closing tag

* capitilized character

* PR comments

* removed props

* small changes

* color added to reponse data

Co-authored-by: Leon <>
2022-07-03 09:48:56 +03:00
David Levanon
48619b3e1c fix small regression with openssl (#1176) 2022-06-30 17:39:03 +03:00
AmitUp9
3b0b311e1e TRA_4623 - methods view on timeline stats with coordination to the pie stats (#1175)
* Add select protocol → when selected, the view will be on commands of that exact protocol

* CR fixes

* added const instead of free string

* remove redundant sass file
2022-06-30 13:35:08 +03:00
gadotroee
3a9236a381 Fix replay to be like the test (#1173)
Represent should get an entry that was marshalled and unmarshalled
2022-06-29 11:54:16 +03:00
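The fix boils down to round-tripping the entry through JSON before handing it to `Represent`, the same preparation the replay code further down performs. A minimal sketch of that round-trip, reusing the `tapApi` types from this repository (the wrapper function and package names are made up):

```go
package replayexample // hypothetical package name

import (
	"encoding/json"

	tapApi "github.com/up9inc/mizu/tap/api"
)

// representEntry mirrors the fix above: Represent must receive an entry
// that went through a JSON marshal/unmarshal round-trip, the same way the
// replay code later in this diff prepares it.
func representEntry(extension *tapApi.Extension, entry *tapApi.Entry) ([]byte, error) {
	entryMarshalled, err := json.Marshal(entry)
	if err != nil {
		return nil, err
	}
	var entryUnmarshalled *tapApi.Entry
	if err := json.Unmarshal(entryMarshalled, &entryUnmarshalled); err != nil {
		return nil, err
	}
	return extension.Dissector.Represent(entryUnmarshalled.Request, entryUnmarshalled.Response)
}
```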
AmitUp9
2e7fd34210 Move general functions from timeline stats to helpers utils (#1170)
* no message

* format document
2022-06-29 11:15:22 +03:00
gadotroee
01af6aa19c Add reply endpoint for http (#1168) 2022-06-28 18:39:23 +03:00
David Levanon
2bfae1baae allow to configure max live streams from mizu cli (#1172)
* allow to configure max live streams from mizu cli

* Update cli/cmd/tap.go

Co-authored-by: Nimrod Gilboa Markevich <59927337+nimrod-up9@users.noreply.github.com>

Co-authored-by: Nimrod Gilboa Markevich <59927337+nimrod-up9@users.noreply.github.com>
2022-06-28 14:41:47 +03:00
RoyUP9
2df9fb49db Changed entry protocol field from struct to unique string (#1167) 2022-06-26 17:08:09 +03:00
163 changed files with 345907 additions and 12287 deletions

View File

@@ -32,6 +32,10 @@ jobs:
id: agent_modified_files
run: devops/check_modified_files.sh agent/
- name: Generate eBPF object files and go bindings
id: generate_ebpf
run: make bpf
- name: Go lint - agent
uses: golangci/golangci-lint-action@v2
if: steps.agent_modified_files.outputs.matched == 'true'

View File

@@ -40,6 +40,10 @@ jobs:
run: |
./devops/install-capstone.sh
- name: Generate eBPF object files and go bindings
id: generate_ebpf
run: make bpf
- name: Check CLI modified files
id: cli_modified_files
run: devops/check_modified_files.sh cli/

3
.gitignore vendored
View File

@@ -56,3 +56,6 @@ tap/extensions/*/expect
# Ignore *.log files
*.log
# Object files
*.o

View File

@@ -8,7 +8,7 @@ SHELL=/bin/bash
# HELP
# This will output the help for each task
# thanks to https://marmelab.com/blog/2016/02/29/auto-documented-makefile.html
.PHONY: help ui agent agent-debug cli tap docker
.PHONY: help ui agent agent-debug cli tap docker bpf clean-bpf
help: ## This help.
@awk 'BEGIN {FS = ":.*?## "} /^[a-zA-Z_-]+:.*?## / {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}' $(MAKEFILE_LIST)
@@ -20,6 +20,13 @@ TS_SUFFIX="$(shell date '+%s')"
GIT_BRANCH="$(shell git branch | grep \* | cut -d ' ' -f2 | tr '[:upper:]' '[:lower:]' | tr '/' '_')"
BUCKET_PATH=static.up9.io/mizu/$(GIT_BRANCH)
export VER?=0.0
ARCH=$(shell uname -m)
ifeq ($(ARCH),$(filter $(ARCH),aarch64 arm64))
BPF_O_ARCH_LABEL=arm64
else
BPF_O_ARCH_LABEL=x86
endif
BPF_O_FILES = tap/tlstapper/tlstapper46_bpfel_$(BPF_O_ARCH_LABEL).o tap/tlstapper/tlstapper_bpfel_$(BPF_O_ARCH_LABEL).o
ui: ## Build UI.
@(cd ui; npm i ; npm run build; )
@@ -31,11 +38,17 @@ cli: ## Build CLI.
cli-debug: ## Build CLI.
@echo "building cli"; cd cli && $(MAKE) build-debug
agent: ## Build agent.
agent: bpf ## Build agent.
@(echo "building mizu agent .." )
@(cd agent; go build -o build/mizuagent main.go)
@ls -l agent/build
bpf: $(BPF_O_FILES)
$(BPF_O_FILES): $(wildcard tap/tlstapper/bpf/**/*.[ch])
@(echo "building tlstapper bpf")
@(./tap/tlstapper/bpf-builder/build.sh)
agent-debug: ## Build agent for debug.
@(echo "building mizu agent for debug.." )
@(cd agent; go build -gcflags="all=-N -l" -o build/mizuagent main.go)
@@ -76,6 +89,9 @@ clean-cli: ## Clean CLI.
clean-docker: ## Run clean docker
@(echo "DOCKER cleanup - NOT IMPLEMENTED YET " )
clean-bpf:
@(rm $(BPF_O_FILES) ; echo "bpf cleanup done" )
test-lint: ## Run lint on all modules
cd agent && golangci-lint run
cd shared && golangci-lint run

View File

@@ -11,7 +11,6 @@ module.exports = defineConfig({
testUrl: 'http://localhost:8899/',
redactHeaderContent: 'User-Header[REDACTED]',
redactBodyContent: '{ "User": "[REDACTED]" }',
regexMaskingBodyContent: '[REDACTED]',
greenFilterColor: 'rgb(210, 250, 210)',
redFilterColor: 'rgb(250, 214, 220)',
bodyJsonClass: '.hljs',

View File

@@ -1,7 +0,0 @@
import {isValueExistsInElement} from "../testHelpers/TrafficHelper";
it('Loading Mizu', function () {
cy.visit(Cypress.env('testUrl'));
});
isValueExistsInElement(true, Cypress.env('regexMaskingBodyContent'), Cypress.env('bodyJsonClass'));

View File

@@ -2,10 +2,8 @@ package acceptanceTests
import (
"archive/zip"
"bytes"
"fmt"
"io/ioutil"
"net/http"
"os/exec"
"path"
"strings"
@@ -343,7 +341,7 @@ func TestTapRedact(t *testing.T) {
tapNamespace := GetDefaultTapNamespace()
tapCmdArgs = append(tapCmdArgs, tapNamespace...)
tapCmdArgs = append(tapCmdArgs, "--redact")
tapCmdArgs = append(tapCmdArgs, "--redact", "--set", "tap.redact-patterns.request-headers=User-Header", "--set", "tap.redact-patterns.request-body=User")
tapCmd := exec.Command(cliPath, tapCmdArgs...)
t.Logf("running command: %v", tapCmd.String())
@@ -429,60 +427,6 @@ func TestTapNoRedact(t *testing.T) {
RunCypressTests(t, "npx cypress run --spec \"cypress/e2e/tests/NoRedact.js\"")
}
func TestTapRegexMasking(t *testing.T) {
if testing.Short() {
t.Skip("ignored acceptance test")
}
cliPath, cliPathErr := GetCliPath()
if cliPathErr != nil {
t.Errorf("failed to get cli path, err: %v", cliPathErr)
return
}
tapCmdArgs := GetDefaultTapCommandArgs()
tapNamespace := GetDefaultTapNamespace()
tapCmdArgs = append(tapCmdArgs, tapNamespace...)
tapCmdArgs = append(tapCmdArgs, "--redact")
tapCmdArgs = append(tapCmdArgs, "-r", "Mizu")
tapCmd := exec.Command(cliPath, tapCmdArgs...)
t.Logf("running command: %v", tapCmd.String())
t.Cleanup(func() {
if err := CleanupCommand(tapCmd); err != nil {
t.Logf("failed to cleanup tap command, err: %v", err)
}
})
if err := tapCmd.Start(); err != nil {
t.Errorf("failed to start tap command, err: %v", err)
return
}
apiServerUrl := GetApiServerUrl(DefaultApiServerPort)
if err := WaitTapPodsReady(apiServerUrl); err != nil {
t.Errorf("failed to start tap pods on time, err: %v", err)
return
}
proxyUrl := GetProxyUrl(DefaultNamespaceName, DefaultServiceName)
for i := 0; i < DefaultEntriesCount; i++ {
response, requestErr := http.Post(fmt.Sprintf("%v/post", proxyUrl), "text/plain", bytes.NewBufferString("Mizu"))
if _, requestErr = ExecuteHttpRequest(response, requestErr); requestErr != nil {
t.Errorf("failed to send proxy request, err: %v", requestErr)
return
}
}
RunCypressTests(t, "npx cypress run --spec \"cypress/e2e/tests/RegexMasking.js\"")
}
func TestTapIgnoredUserAgents(t *testing.T) {
if testing.Short() {
t.Skip("ignored acceptance test")

View File

@@ -30,7 +30,6 @@ require (
github.com/up9inc/mizu/tap/extensions/kafka v0.0.0
github.com/up9inc/mizu/tap/extensions/redis v0.0.0
github.com/wI2L/jsondiff v0.1.1
github.com/yalp/jsonpath v0.0.0-20180802001716-5cc68e5049a0
k8s.io/api v0.23.3
k8s.io/apimachinery v0.23.3
k8s.io/client-go v0.23.3
@@ -52,7 +51,7 @@ require (
github.com/beevik/etree v1.1.0 // indirect
github.com/chai2010/gettext-go v0.0.0-20160711120539-c6fed771bfd5 // indirect
github.com/chanced/dynamic v0.0.0-20211210164248-f8fadb1d735b // indirect
github.com/cilium/ebpf v0.8.1 // indirect
github.com/cilium/ebpf v0.9.0 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/evanphx/json-patch v5.6.0+incompatible // indirect
github.com/exponent-io/jsonpath v0.0.0-20210407135951-1de76d718b3f // indirect
@@ -90,6 +89,7 @@ require (
github.com/mattn/go-isatty v0.0.14 // indirect
github.com/mertyildiran/gqlparser/v2 v2.4.6 // indirect
github.com/mitchellh/go-wordwrap v1.0.1 // indirect
github.com/moby/moby v20.10.17+incompatible // indirect
github.com/moby/spdystream v0.2.0 // indirect
github.com/moby/term v0.0.0-20210619224110-3f7ff695adc6 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
@@ -105,6 +105,7 @@ require (
github.com/santhosh-tekuri/jsonschema/v5 v5.0.0 // indirect
github.com/segmentio/kafka-go v0.4.27 // indirect
github.com/shirou/gopsutil v3.21.11+incompatible // indirect
github.com/sirupsen/logrus v1.8.1 // indirect
github.com/spf13/cobra v1.3.0 // indirect
github.com/spf13/pflag v1.0.5 // indirect
github.com/struCoder/pidusage v0.2.1 // indirect

View File

@@ -128,8 +128,8 @@ github.com/chanced/openapi v0.0.8/go.mod h1:SxE2VMLPw+T7Vq8nwbVVhDF2PigvRF4n5Xyq
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
github.com/cilium/ebpf v0.8.1 h1:bLSSEbBLqGPXxls55pGr5qWZaTqcmfDJHhou7t254ao=
github.com/cilium/ebpf v0.8.1/go.mod h1:f5zLIM0FSNuAkSyLAN7X+Hy6yznlF1mNiWUMfxMtrgk=
github.com/cilium/ebpf v0.9.0 h1:ldiV+FscPCQ/p3mNEV4O02EPbUZJFsoEtHvIr9xLTvk=
github.com/cilium/ebpf v0.9.0/go.mod h1:+OhNOIXx/Fnu1IE8bJz2dzOA+VSfyTfdNUVdlQnxUFY=
github.com/circonus-labs/circonus-gometrics v2.3.1+incompatible/go.mod h1:nmEj6Dob7S7YxXgwXpfOuvO54S+tGdZdw9fuRZt25Ag=
github.com/circonus-labs/circonusllhist v0.1.3/go.mod h1:kMXHVDlOchFAehlya5ePtbp5jckzBHf4XRpQvBOLI+I=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
@@ -517,6 +517,8 @@ github.com/mitchellh/mapstructure v0.0.0-20160808181253-ca63d7c062ee/go.mod h1:F
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/mapstructure v1.4.1/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/mitchellh/mapstructure v1.4.3/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/moby/moby v20.10.17+incompatible h1:TJJfyk2fLEgK+RzqVpFNkDkm0oEi+MLUfwt9lEYnp5g=
github.com/moby/moby v20.10.17+incompatible/go.mod h1:fDXVQ6+S340veQPv35CzDahGBmHsiclFwfEygB/TWMc=
github.com/moby/spdystream v0.2.0 h1:cjW1zVyyoiM0T7b6UoySUFqzXMoqRckQtXwGPiBhOM8=
github.com/moby/spdystream v0.2.0/go.mod h1:f7i0iNDQJ059oMTcWxx8MA/zKFIuD/lY+0GqbN2Wy8c=
github.com/moby/term v0.0.0-20210610120745-9d4ed1856297/go.mod h1:vgPCkQMyxTZ7IDy8SXRufE172gr8+K/JE/7hHFxHW3A=
@@ -629,6 +631,8 @@ github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeV
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88=
github.com/sirupsen/logrus v1.8.1 h1:dJKuHgqk1NNQlqoA6BTlM1Wf9DOH3NBjQyu0h9+AZZE=
github.com/sirupsen/logrus v1.8.1/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
@@ -707,8 +711,6 @@ github.com/xlab/treeprint v0.0.0-20181112141820-a009c3971eca/go.mod h1:ce1O1j6Ut
github.com/xlab/treeprint v1.1.0 h1:G/1DjNkPpfZCFt9CSh6b5/nY4VimlbHF3Rh4obvtzDk=
github.com/xlab/treeprint v1.1.0/go.mod h1:gj5Gd3gPdKtR1ikdDK6fnFLdmIS0X30kTTuNd/WEJu0=
github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=
github.com/yalp/jsonpath v0.0.0-20180802001716-5cc68e5049a0 h1:6fRhSjgLCkTD3JnJxvaJ4Sj+TYblw757bqYgZaOq5ZY=
github.com/yalp/jsonpath v0.0.0-20180802001716-5cc68e5049a0/go.mod h1:/LWChgwKmvncFJFHJ7Gvn9wZArjbV5/FppcK2fKk/tI=
github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
@@ -1251,8 +1253,9 @@ gopkg.in/yaml.v3 v3.0.0-20200615113413-eeeca48fe776/go.mod h1:K4uyk7z7BCEPqu6E+C
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b h1:h8qDotaEPuJATrMmW04NCwg7v22aHH28wwpauUhK9Oo=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gotest.tools/v3 v3.0.2/go.mod h1:3SzNCllyD9/Y+b5r9JIKQ474KzkZyqLqEfYqMsX94Bk=
gotest.tools/v3 v3.0.3 h1:4AuOwCGf4lLR9u3YOe2awrHygurzhO/HeQ6laiA6Sx0=
gotest.tools/v3 v3.0.3/go.mod h1:Z7Lb0S5l+klDB31fvDQX8ss/FlKDxtlFlw3Oa8Ymbl8=
gotest.tools/v3 v3.3.0 h1:MfDY1b1/0xN1CyMlQDac0ziEy9zJQd9CXBRRDHw2jJo=
gotest.tools/v3 v3.3.0/go.mod h1:Mcr9QNxkg0uMvy/YElmo4SpXgJKWgQvYrT7Kw5RzJ1A=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=

View File

@@ -131,6 +131,7 @@ func hostApi(socketHarOutputChannel chan<- *tapApi.OutputChannelItem) *gin.Engin
routes.MetadataRoutes(ginApp)
routes.StatusRoutes(ginApp)
routes.DbRoutes(ginApp)
routes.ReplayRoutes(ginApp)
return ginApp
}
@@ -155,7 +156,7 @@ func runInTapperMode() {
hostMode := os.Getenv(shared.HostModeEnvVar) == "1"
tapOpts := &tap.TapOpts{
HostMode: hostMode,
HostMode: hostMode,
}
filteredOutputItemsChannel := make(chan *tapApi.OutputChannelItem)

View File

@@ -22,7 +22,16 @@ func (e *DefaultEntryStreamerSocketConnector) SendEntry(socketId int, entry *tap
if params.EnableFullEntries {
message, _ = models.CreateFullEntryWebSocketMessage(entry)
} else {
extension := extensionsMap[entry.Protocol.Name]
protocol, ok := protocolsMap[entry.Protocol.ToString()]
if !ok {
return fmt.Errorf("protocol not found, protocol: %v", protocol)
}
extension, ok := extensionsMap[protocol.Name]
if !ok {
return fmt.Errorf("extension not found, extension: %v", protocol.Name)
}
base := extension.Dissector.Summarize(entry)
message, _ = models.CreateBaseEntryWebSocketMessage(base)
}

View File

@@ -12,7 +12,6 @@ import (
"time"
"github.com/up9inc/mizu/agent/pkg/dependency"
"github.com/up9inc/mizu/agent/pkg/models"
"github.com/up9inc/mizu/agent/pkg/oas"
"github.com/up9inc/mizu/agent/pkg/servicemap"
@@ -101,20 +100,13 @@ func startReadingChannel(outputItems <-chan *tapApi.OutputChannelItem, extension
for item := range outputItems {
extension := extensionsMap[item.Protocol.Name]
resolvedSource, resolvedDestionation, namespace := resolveIP(item.ConnectionInfo)
resolvedSource, resolvedDestination, namespace := resolveIP(item.ConnectionInfo)
if namespace == "" && item.Namespace != tapApi.UnknownNamespace {
namespace = item.Namespace
}
mizuEntry := extension.Dissector.Analyze(item, resolvedSource, resolvedDestionation, namespace)
if extension.Protocol.Name == "http" {
harEntry, err := har.NewEntry(mizuEntry.Request, mizuEntry.Response, mizuEntry.StartTime, mizuEntry.ElapsedTime)
if err == nil {
rules, _, _ := models.RunValidationRulesState(*harEntry, mizuEntry.Destination.Name)
mizuEntry.Rules = rules
}
}
mizuEntry := extension.Dissector.Analyze(item, resolvedSource, resolvedDestination, namespace)
data, err := json.Marshal(mizuEntry)
if err != nil {

View File

@@ -14,10 +14,14 @@ import (
tapApi "github.com/up9inc/mizu/tap/api"
)
var extensionsMap map[string]*tapApi.Extension // global
var (
extensionsMap map[string]*tapApi.Extension // global
protocolsMap map[string]*tapApi.Protocol //global
)
func InitExtensionsMap(ref map[string]*tapApi.Extension) {
extensionsMap = ref
func InitMaps(extensions map[string]*tapApi.Extension, protocols map[string]*tapApi.Protocol) {
extensionsMap = extensions
protocolsMap = protocols
}
type EventHandlers interface {
@@ -93,7 +97,9 @@ func websocketHandler(c *gin.Context, eventHandlers EventHandlers, isTapper bool
websocketIdsLock.Unlock()
defer func() {
socketCleanup(socketId, connectedWebsockets[socketId])
if socketConnection := connectedWebsockets[socketId]; socketConnection != nil {
socketCleanup(socketId, socketConnection)
}
}()
eventHandlers.WebSocketConnect(c, socketId, isTapper)

View File

@@ -9,10 +9,11 @@ import (
"github.com/op/go-logging"
basenine "github.com/up9inc/basenine/client/go"
"github.com/up9inc/mizu/agent/pkg/api"
"github.com/up9inc/mizu/agent/pkg/providers"
"github.com/up9inc/mizu/agent/pkg/utils"
"github.com/up9inc/mizu/logger"
"github.com/up9inc/mizu/tap/dbgctl"
tapApi "github.com/up9inc/mizu/tap/api"
"github.com/up9inc/mizu/tap/dbgctl"
amqpExt "github.com/up9inc/mizu/tap/extensions/amqp"
httpExt "github.com/up9inc/mizu/tap/extensions/http"
kafkaExt "github.com/up9inc/mizu/tap/extensions/kafka"
@@ -22,11 +23,13 @@ import (
var (
Extensions []*tapApi.Extension // global
ExtensionsMap map[string]*tapApi.Extension // global
ProtocolsMap map[string]*tapApi.Protocol //global
)
func LoadExtensions() {
Extensions = make([]*tapApi.Extension, 0)
ExtensionsMap = make(map[string]*tapApi.Extension)
ProtocolsMap = make(map[string]*tapApi.Protocol)
extensionHttp := &tapApi.Extension{}
dissectorHttp := httpExt.NewDissector()
@@ -34,6 +37,10 @@ func LoadExtensions() {
extensionHttp.Dissector = dissectorHttp
Extensions = append(Extensions, extensionHttp)
ExtensionsMap[extensionHttp.Protocol.Name] = extensionHttp
protocolsHttp := dissectorHttp.GetProtocols()
for k, v := range protocolsHttp {
ProtocolsMap[k] = v
}
if !dbgctl.MizuTapperDisableNonHttpExtensions {
extensionAmqp := &tapApi.Extension{}
@@ -42,6 +49,10 @@ func LoadExtensions() {
extensionAmqp.Dissector = dissectorAmqp
Extensions = append(Extensions, extensionAmqp)
ExtensionsMap[extensionAmqp.Protocol.Name] = extensionAmqp
protocolsAmqp := dissectorAmqp.GetProtocols()
for k, v := range protocolsAmqp {
ProtocolsMap[k] = v
}
extensionKafka := &tapApi.Extension{}
dissectorKafka := kafkaExt.NewDissector()
@@ -49,6 +60,10 @@ func LoadExtensions() {
extensionKafka.Dissector = dissectorKafka
Extensions = append(Extensions, extensionKafka)
ExtensionsMap[extensionKafka.Protocol.Name] = extensionKafka
protocolsKafka := dissectorKafka.GetProtocols()
for k, v := range protocolsKafka {
ProtocolsMap[k] = v
}
extensionRedis := &tapApi.Extension{}
dissectorRedis := redisExt.NewDissector()
@@ -56,13 +71,18 @@ func LoadExtensions() {
extensionRedis.Dissector = dissectorRedis
Extensions = append(Extensions, extensionRedis)
ExtensionsMap[extensionRedis.Protocol.Name] = extensionRedis
protocolsRedis := dissectorRedis.GetProtocols()
for k, v := range protocolsRedis {
ProtocolsMap[k] = v
}
}
sort.Slice(Extensions, func(i, j int) bool {
return Extensions[i].Protocol.Priority < Extensions[j].Protocol.Priority
})
api.InitExtensionsMap(ExtensionsMap)
api.InitMaps(ExtensionsMap, ProtocolsMap)
providers.InitProtocolToColor(ProtocolsMap)
}
func ConfigureBasenineServer(host string, port string, dbSize int64, logLevel logging.Level, insertionFilter string) {

View File

@@ -0,0 +1,34 @@
package controllers
import (
"net/http"
"time"
"github.com/gin-gonic/gin"
"github.com/up9inc/mizu/agent/pkg/replay"
"github.com/up9inc/mizu/agent/pkg/validation"
"github.com/up9inc/mizu/logger"
)
const (
replayTimeout = 10 * time.Second
)
func ReplayRequest(c *gin.Context) {
logger.Log.Debug("Starting replay")
replayDetails := &replay.Details{}
if err := c.Bind(replayDetails); err != nil {
c.JSON(http.StatusBadRequest, err)
return
}
logger.Log.Debugf("Validating replay, %v", replayDetails)
if err := validation.Validate(replayDetails); err != nil {
c.JSON(http.StatusBadRequest, err)
return
}
logger.Log.Debug("Executing replay, %v", replayDetails)
result := replay.ExecuteRequest(replayDetails, replayTimeout)
c.JSON(http.StatusOK, result)
}

View File

@@ -36,11 +36,13 @@ var (
)
var ProtocolHttp = &tapApi.Protocol{
Name: "http",
ProtocolSummary: tapApi.ProtocolSummary{
Name: "http",
Version: "1.1",
Abbreviation: "HTTP",
},
LongName: "Hypertext Transfer Protocol -- HTTP/1.1",
Abbreviation: "HTTP",
Macro: "http",
Version: "1.1",
BackgroundColor: "#205cf5",
ForegroundColor: "#ffffff",
FontSize: 12,

View File

@@ -1,7 +1,10 @@
package controllers
import (
"fmt"
"net/http"
"strconv"
"time"
core "k8s.io/api/core/v1"
@@ -79,13 +82,25 @@ func GetGeneralStats(c *gin.Context) {
c.JSON(http.StatusOK, providers.GetGeneralStats())
}
func GetAccumulativeStats(c *gin.Context) {
c.JSON(http.StatusOK, providers.GetAccumulativeStats())
func GetTrafficStats(c *gin.Context) {
startTime, endTime, err := getStartEndTime(c)
if err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, providers.GetTrafficStats(startTime, endTime))
}
func GetAccumulativeStatsTiming(c *gin.Context) {
// for now hardcoded 10 bars of 5 minutes interval
c.JSON(http.StatusOK, providers.GetAccumulativeStatsTiming(300, 10))
func getStartEndTime(c *gin.Context) (time.Time, time.Time, error) {
startTimeValue, err := strconv.Atoi(c.Query("startTimeMs"))
if err != nil {
return time.UnixMilli(0), time.UnixMilli(0), fmt.Errorf("invalid start time: %v", err)
}
endTimeValue, err := strconv.Atoi(c.Query("endTimeMs"))
if err != nil {
return time.UnixMilli(0), time.UnixMilli(0), fmt.Errorf("invalid end time: %v", err)
}
return time.UnixMilli(int64(startTimeValue)), time.UnixMilli(int64(endTimeValue)), nil
}
func GetCurrentResolvingInformation(c *gin.Context) {

View File

@@ -3,11 +3,11 @@ package entries
import (
"encoding/json"
"errors"
"fmt"
"time"
basenine "github.com/up9inc/basenine/client/go"
"github.com/up9inc/mizu/agent/pkg/app"
"github.com/up9inc/mizu/agent/pkg/har"
"github.com/up9inc/mizu/agent/pkg/models"
"github.com/up9inc/mizu/logger"
"github.com/up9inc/mizu/shared"
@@ -38,11 +38,20 @@ func (e *BasenineEntriesProvider) GetEntries(entriesRequest *models.EntriesReque
return nil, nil, err
}
extension := app.ExtensionsMap[entry.Protocol.Name]
protocol, ok := app.ProtocolsMap[entry.Protocol.ToString()]
if !ok {
return nil, nil, fmt.Errorf("protocol not found, protocol: %v", protocol)
}
extension, ok := app.ExtensionsMap[protocol.Name]
if !ok {
return nil, nil, fmt.Errorf("extension not found, extension: %v", protocol.Name)
}
base := extension.Dissector.Summarize(entry)
dataSlice = append(dataSlice, &tapApi.EntryWrapper{
Protocol: entry.Protocol,
Protocol: *protocol,
Data: entry,
Base: base,
})
@@ -68,7 +77,16 @@ func (e *BasenineEntriesProvider) GetEntry(singleEntryRequest *models.SingleEntr
return nil, errors.New(string(bytes))
}
extension := app.ExtensionsMap[entry.Protocol.Name]
protocol, ok := app.ProtocolsMap[entry.Protocol.ToString()]
if !ok {
return nil, fmt.Errorf("protocol not found, protocol: %v", protocol)
}
extension, ok := app.ExtensionsMap[protocol.Name]
if !ok {
return nil, fmt.Errorf("extension not found, extension: %v", protocol.Name)
}
base := extension.Dissector.Summarize(entry)
var representation []byte
representation, err = extension.Dissector.Represent(entry.Request, entry.Response)
@@ -76,24 +94,10 @@ func (e *BasenineEntriesProvider) GetEntry(singleEntryRequest *models.SingleEntr
return nil, err
}
var rules []map[string]interface{}
var isRulesEnabled bool
if entry.Protocol.Name == "http" {
harEntry, _ := har.NewEntry(entry.Request, entry.Response, entry.StartTime, entry.ElapsedTime)
_, rulesMatched, _isRulesEnabled := models.RunValidationRulesState(*harEntry, entry.Destination.Name)
isRulesEnabled = _isRulesEnabled
inrec, _ := json.Marshal(rulesMatched)
if err := json.Unmarshal(inrec, &rules); err != nil {
logger.Log.Error(err)
}
}
return &tapApi.EntryWrapper{
Protocol: entry.Protocol,
Protocol: *protocol,
Representation: string(representation),
Data: entry,
Base: base,
Rules: rules,
IsRulesEnabled: isRulesEnabled,
}, nil
}

View File

@@ -11,75 +11,30 @@ import (
"github.com/up9inc/mizu/logger"
)
// Keep it because we might want cookies in the future
//func BuildCookies(rawCookies []interface{}) []har.Cookie {
// cookies := make([]har.Cookie, 0, len(rawCookies))
//
// for _, cookie := range rawCookies {
// c := cookie.(map[string]interface{})
// expiresStr := ""
// if c["expires"] != nil {
// expiresStr = c["expires"].(string)
// }
// expires, _ := time.Parse(time.RFC3339, expiresStr)
// httpOnly := false
// if c["httponly"] != nil {
// httpOnly, _ = strconv.ParseBool(c["httponly"].(string))
// }
// secure := false
// if c["secure"] != nil {
// secure, _ = strconv.ParseBool(c["secure"].(string))
// }
// path := ""
// if c["path"] != nil {
// path = c["path"].(string)
// }
// domain := ""
// if c["domain"] != nil {
// domain = c["domain"].(string)
// }
//
// cookies = append(cookies, har.Cookie{
// Name: c["name"].(string),
// Value: c["value"].(string),
// Path: path,
// Domain: domain,
// HTTPOnly: httpOnly,
// Secure: secure,
// Expires: expires,
// Expires8601: expiresStr,
// })
// }
//
// return cookies
//}
func BuildHeaders(rawHeaders []interface{}) ([]Header, string, string, string, string, string) {
func BuildHeaders(rawHeaders map[string]interface{}) ([]Header, string, string, string, string, string) {
var host, scheme, authority, path, status string
headers := make([]Header, 0, len(rawHeaders))
for _, header := range rawHeaders {
h := header.(map[string]interface{})
for key, value := range rawHeaders {
headers = append(headers, Header{
Name: h["name"].(string),
Value: h["value"].(string),
Name: key,
Value: value.(string),
})
if h["name"] == "Host" {
host = h["value"].(string)
if key == "Host" {
host = value.(string)
}
if h["name"] == ":authority" {
authority = h["value"].(string)
if key == ":authority" {
authority = value.(string)
}
if h["name"] == ":scheme" {
scheme = h["value"].(string)
if key == ":scheme" {
scheme = value.(string)
}
if h["name"] == ":path" {
path = h["value"].(string)
if key == ":path" {
path = value.(string)
}
if h["name"] == ":status" {
status = h["value"].(string)
if key == ":status" {
status = value.(string)
}
}
@@ -119,8 +74,8 @@ func BuildPostParams(rawParams []interface{}) []Param {
}
func NewRequest(request map[string]interface{}) (harRequest *Request, err error) {
headers, host, scheme, authority, path, _ := BuildHeaders(request["_headers"].([]interface{}))
cookies := make([]Cookie, 0) // BuildCookies(request["_cookies"].([]interface{}))
headers, host, scheme, authority, path, _ := BuildHeaders(request["headers"].(map[string]interface{}))
cookies := make([]Cookie, 0)
postData, _ := request["postData"].(map[string]interface{})
mimeType := postData["mimeType"]
@@ -134,12 +89,20 @@ func NewRequest(request map[string]interface{}) (harRequest *Request, err error)
}
queryString := make([]QueryString, 0)
for _, _qs := range request["_queryString"].([]interface{}) {
qs := _qs.(map[string]interface{})
queryString = append(queryString, QueryString{
Name: qs["name"].(string),
Value: qs["value"].(string),
})
for key, value := range request["queryString"].(map[string]interface{}) {
if valuesInterface, ok := value.([]interface{}); ok {
for _, valueInterface := range valuesInterface {
queryString = append(queryString, QueryString{
Name: key,
Value: valueInterface.(string),
})
}
} else {
queryString = append(queryString, QueryString{
Name: key,
Value: value.(string),
})
}
}
url := fmt.Sprintf("http://%s%s", host, request["url"].(string))
@@ -172,8 +135,8 @@ func NewRequest(request map[string]interface{}) (harRequest *Request, err error)
}
func NewResponse(response map[string]interface{}) (harResponse *Response, err error) {
headers, _, _, _, _, _status := BuildHeaders(response["_headers"].([]interface{}))
cookies := make([]Cookie, 0) // BuildCookies(response["_cookies"].([]interface{}))
headers, _, _, _, _, _status := BuildHeaders(response["headers"].(map[string]interface{}))
cookies := make([]Cookie, 0)
content, _ := response["content"].(map[string]interface{})
mimeType := content["mimeType"]

View File

@@ -4,7 +4,6 @@ import (
"encoding/json"
"github.com/up9inc/mizu/agent/pkg/har"
"github.com/up9inc/mizu/agent/pkg/rules"
tapApi "github.com/up9inc/mizu/tap/api"
basenine "github.com/up9inc/basenine/client/go"
@@ -143,9 +142,3 @@ type ExtendedCreator struct {
*har.Creator
Source *string `json:"_source"`
}
func RunValidationRulesState(harEntry har.Entry, service string) (tapApi.ApplicableRules, []rules.RulesMatched, bool) {
resultPolicyToSend, isEnabled := rules.MatchRequestPolicy(harEntry, service)
statusPolicyToSend, latency, numberOfRules := rules.PassedValidationRules(resultPolicyToSend)
return tapApi.ApplicableRules{Status: statusPolicyToSend, Latency: latency, NumberOfRules: numberOfRules}, resultPolicyToSend, isEnabled
}

View File

@@ -6,9 +6,8 @@ import (
"sync"
"github.com/up9inc/mizu/agent/pkg/har"
"github.com/up9inc/mizu/tap/api"
"github.com/up9inc/mizu/logger"
"github.com/up9inc/mizu/tap/api"
)
var (

View File

@@ -1,6 +1,9 @@
package providers
import (
"crypto/md5"
"encoding/hex"
"fmt"
"reflect"
"sync"
"time"
@@ -26,7 +29,6 @@ type TimeFrameStatsValue struct {
type ProtocolStats struct {
MethodsStats map[string]*SizeAndEntriesCount `json:"methods"`
Color string `json:"color"`
}
type SizeAndEntriesCount struct {
@@ -36,13 +38,13 @@ type SizeAndEntriesCount struct {
type AccumulativeStatsCounter struct {
Name string `json:"name"`
Color string `json:"color"`
EntriesCount int `json:"entriesCount"`
VolumeSizeBytes int `json:"volumeSizeBytes"`
}
type AccumulativeStatsProtocol struct {
AccumulativeStatsCounter
Color string `json:"color"`
Methods []*AccumulativeStatsCounter `json:"methods"`
}
@@ -51,46 +53,46 @@ type AccumulativeStatsProtocolTime struct {
Time int64 `json:"timestamp"`
}
type TrafficStatsResponse struct {
Protocols []string `json:"protocols"`
PieStats []*AccumulativeStatsProtocol `json:"pie"`
TimelineStats []*AccumulativeStatsProtocolTime `json:"timeline"`
}
var (
generalStats = GeneralStats{}
bucketsStats = BucketStats{}
bucketStatsLocker = sync.Mutex{}
protocolToColor = map[string]string{}
)
const (
InternalBucketThreshold = time.Minute * 1
MaxNumberOfBars = 30
)
func ResetGeneralStats() {
generalStats = GeneralStats{}
}
func GetGeneralStats() GeneralStats {
return generalStats
func GetGeneralStats() *GeneralStats {
return &generalStats
}
func GetAccumulativeStats() []*AccumulativeStatsProtocol {
bucketStatsCopy := getBucketStatsCopy()
if len(bucketStatsCopy) == 0 {
return make([]*AccumulativeStatsProtocol, 0)
func InitProtocolToColor(protocolMap map[string]*api.Protocol) {
for item, value := range protocolMap {
protocolToColor[api.GetProtocolSummary(item).Abbreviation] = value.BackgroundColor
}
methodsPerProtocolAggregated, protocolToColor := getAggregatedStatsAllTime(bucketStatsCopy)
return convertAccumulativeStatsDictToArray(methodsPerProtocolAggregated, protocolToColor)
}
func GetAccumulativeStatsTiming(intervalSeconds int, numberOfBars int) []*AccumulativeStatsProtocolTime {
bucketStatsCopy := getBucketStatsCopy()
if len(bucketStatsCopy) == 0 {
return make([]*AccumulativeStatsProtocolTime, 0)
func GetTrafficStats(startTime time.Time, endTime time.Time) *TrafficStatsResponse {
bucketsStatsCopy := getFilteredBucketStatsCopy(startTime, endTime)
return &TrafficStatsResponse{
Protocols: getAvailableProtocols(bucketsStatsCopy),
PieStats: getAccumulativeStats(bucketsStatsCopy),
TimelineStats: getAccumulativeStatsTiming(bucketsStatsCopy),
}
firstBucketTime := getFirstBucketTime(time.Now().UTC(), intervalSeconds, numberOfBars)
methodsPerProtocolPerTimeAggregated, protocolToColor := getAggregatedResultTimingFromSpecificTime(intervalSeconds, bucketStatsCopy, firstBucketTime)
return convertAccumulativeStatsTimelineDictToArray(methodsPerProtocolPerTimeAggregated, protocolToColor)
}
func EntryAdded(size int, summery *api.BaseEntry) {
@@ -108,6 +110,65 @@ func EntryAdded(size int, summery *api.BaseEntry) {
generalStats.LastEntryTimestamp = currentTimestamp
}
func calculateInterval(firstTimestamp int64, lastTimestamp int64) time.Duration {
validDurations := []time.Duration{
time.Minute,
time.Minute * 2,
time.Minute * 3,
time.Minute * 5,
time.Minute * 10,
time.Minute * 15,
time.Minute * 20,
time.Minute * 30,
time.Minute * 45,
time.Minute * 60,
time.Minute * 75,
time.Minute * 90, // 1.5 hours
time.Minute * 120, // 2 hours
time.Minute * 150, // 2.5 hours
time.Minute * 180, // 3 hours
time.Minute * 240, // 4 hours
time.Minute * 300, // 5 hours
time.Minute * 360, // 6 hours
time.Minute * 420, // 7 hours
time.Minute * 480, // 8 hours
time.Minute * 540, // 9 hours
time.Minute * 600, // 10 hours
time.Minute * 660, // 11 hours
time.Minute * 720, // 12 hours
time.Minute * 1440, // 24 hours
}
duration := time.Duration(lastTimestamp-firstTimestamp) * time.Second / time.Duration(MaxNumberOfBars)
for _, validDuration := range validDurations {
if validDuration-duration >= 0 {
return validDuration
}
}
return duration.Round(validDurations[len(validDurations)-1])
}
func getAccumulativeStats(stats BucketStats) []*AccumulativeStatsProtocol {
if len(stats) == 0 {
return make([]*AccumulativeStatsProtocol, 0)
}
methodsPerProtocolAggregated := getAggregatedStats(stats)
return convertAccumulativeStatsDictToArray(methodsPerProtocolAggregated)
}
func getAccumulativeStatsTiming(stats BucketStats) []*AccumulativeStatsProtocolTime {
if len(stats) == 0 {
return make([]*AccumulativeStatsProtocolTime, 0)
}
interval := calculateInterval(stats[0].BucketTime.Unix(), stats[len(stats)-1].BucketTime.Unix()) // in seconds
methodsPerProtocolPerTimeAggregated := getAggregatedResultTiming(stats, interval)
return convertAccumulativeStatsTimelineDictToArray(methodsPerProtocolPerTimeAggregated)
}
func addToBucketStats(size int, summery *api.BaseEntry) {
entryTimeBucketRounded := getBucketFromTimeStamp(summery.Timestamp)
@@ -128,7 +189,6 @@ func addToBucketStats(size int, summery *api.BaseEntry) {
if _, found := bucketOfEntry.ProtocolStats[summery.Protocol.Abbreviation]; !found {
bucketOfEntry.ProtocolStats[summery.Protocol.Abbreviation] = ProtocolStats{
MethodsStats: map[string]*SizeAndEntriesCount{},
Color: summery.Protocol.BackgroundColor,
}
}
if _, found := bucketOfEntry.ProtocolStats[summery.Protocol.Abbreviation].MethodsStats[summery.Method]; !found {
@@ -147,21 +207,15 @@ func getBucketFromTimeStamp(timestamp int64) time.Time {
return entryTimeStampAsTime.Add(-1 * InternalBucketThreshold / 2).Round(InternalBucketThreshold)
}
func getFirstBucketTime(endTime time.Time, intervalSeconds int, numberOfBars int) time.Time {
lastBucketTime := endTime.Add(-1 * time.Second * time.Duration(intervalSeconds) / 2).Round(time.Second * time.Duration(intervalSeconds))
firstBucketTime := lastBucketTime.Add(-1 * time.Second * time.Duration(intervalSeconds*(numberOfBars-1)))
return firstBucketTime
}
func convertAccumulativeStatsTimelineDictToArray(methodsPerProtocolPerTimeAggregated map[time.Time]map[string]map[string]*AccumulativeStatsCounter, protocolToColor map[string]string) []*AccumulativeStatsProtocolTime {
func convertAccumulativeStatsTimelineDictToArray(methodsPerProtocolPerTimeAggregated map[time.Time]map[string]map[string]*AccumulativeStatsCounter) []*AccumulativeStatsProtocolTime {
finalResult := make([]*AccumulativeStatsProtocolTime, 0)
for timeKey, item := range methodsPerProtocolPerTimeAggregated {
protocolsData := make([]*AccumulativeStatsProtocol, 0)
for protocolName := range item {
for protocolName, value := range item {
entriesCount := 0
volumeSizeBytes := 0
methods := make([]*AccumulativeStatsCounter, 0)
for _, methodAccData := range methodsPerProtocolPerTimeAggregated[timeKey][protocolName] {
for _, methodAccData := range value {
entriesCount += methodAccData.EntriesCount
volumeSizeBytes += methodAccData.VolumeSizeBytes
methods = append(methods, methodAccData)
@@ -169,10 +223,10 @@ func convertAccumulativeStatsTimelineDictToArray(methodsPerProtocolPerTimeAggreg
protocolsData = append(protocolsData, &AccumulativeStatsProtocol{
AccumulativeStatsCounter: AccumulativeStatsCounter{
Name: protocolName,
Color: protocolToColor[protocolName],
EntriesCount: entriesCount,
VolumeSizeBytes: volumeSizeBytes,
},
Color: protocolToColor[protocolName],
Methods: methods,
})
}
@@ -184,7 +238,7 @@ func convertAccumulativeStatsTimelineDictToArray(methodsPerProtocolPerTimeAggreg
return finalResult
}
func convertAccumulativeStatsDictToArray(methodsPerProtocolAggregated map[string]map[string]*AccumulativeStatsCounter, protocolToColor map[string]string) []*AccumulativeStatsProtocol {
func convertAccumulativeStatsDictToArray(methodsPerProtocolAggregated map[string]map[string]*AccumulativeStatsCounter) []*AccumulativeStatsProtocol {
protocolsData := make([]*AccumulativeStatsProtocol, 0)
for protocolName, value := range methodsPerProtocolAggregated {
entriesCount := 0
@@ -198,17 +252,17 @@ func convertAccumulativeStatsDictToArray(methodsPerProtocolAggregated map[string
protocolsData = append(protocolsData, &AccumulativeStatsProtocol{
AccumulativeStatsCounter: AccumulativeStatsCounter{
Name: protocolName,
Color: protocolToColor[protocolName],
EntriesCount: entriesCount,
VolumeSizeBytes: volumeSizeBytes,
},
Color: protocolToColor[protocolName],
Methods: methods,
})
}
return protocolsData
}
func getBucketStatsCopy() BucketStats {
func getFilteredBucketStatsCopy(startTime time.Time, endTime time.Time) BucketStats {
bucketStatsCopy := BucketStats{}
bucketStatsLocker.Lock()
if err := copier.Copy(&bucketStatsCopy, bucketsStats); err != nil {
@@ -216,58 +270,59 @@ func getBucketStatsCopy() BucketStats {
return nil
}
bucketStatsLocker.Unlock()
return bucketStatsCopy
filteredBucketStatsCopy := BucketStats{}
interval := InternalBucketThreshold
for _, bucket := range bucketStatsCopy {
if (bucket.BucketTime.After(startTime.Add(-1*interval/2).Round(interval)) && bucket.BucketTime.Before(endTime.Add(-1*interval/2).Round(interval))) ||
bucket.BucketTime.Equal(startTime.Add(-1*interval/2).Round(interval)) ||
bucket.BucketTime.Equal(endTime.Add(-1*interval/2).Round(interval)) {
filteredBucketStatsCopy = append(filteredBucketStatsCopy, bucket)
}
}
return filteredBucketStatsCopy
}
func getAggregatedResultTimingFromSpecificTime(intervalSeconds int, bucketStats BucketStats, firstBucketTime time.Time) (map[time.Time]map[string]map[string]*AccumulativeStatsCounter, map[string]string) {
protocolToColor := map[string]string{}
func getAggregatedResultTiming(stats BucketStats, interval time.Duration) map[time.Time]map[string]map[string]*AccumulativeStatsCounter {
methodsPerProtocolPerTimeAggregated := map[time.Time]map[string]map[string]*AccumulativeStatsCounter{}
bucketStatsIndex := len(bucketStats) - 1
bucketStatsIndex := len(stats) - 1
for bucketStatsIndex >= 0 {
currentBucketTime := bucketStats[bucketStatsIndex].BucketTime
if currentBucketTime.After(firstBucketTime) || currentBucketTime.Equal(firstBucketTime) {
resultBucketRoundedKey := currentBucketTime.Add(-1 * time.Second * time.Duration(intervalSeconds) / 2).Round(time.Second * time.Duration(intervalSeconds))
currentBucketTime := stats[bucketStatsIndex].BucketTime
resultBucketRoundedKey := currentBucketTime.Add(-1 * interval / 2).Round(interval)
for protocolName, data := range bucketStats[bucketStatsIndex].ProtocolStats {
if _, ok := protocolToColor[protocolName]; !ok {
protocolToColor[protocolName] = data.Color
for protocolName, data := range stats[bucketStatsIndex].ProtocolStats {
for methodName, dataOfMethod := range data.MethodsStats {
if _, ok := methodsPerProtocolPerTimeAggregated[resultBucketRoundedKey]; !ok {
methodsPerProtocolPerTimeAggregated[resultBucketRoundedKey] = map[string]map[string]*AccumulativeStatsCounter{}
}
for methodName, dataOfMethod := range data.MethodsStats {
if _, ok := methodsPerProtocolPerTimeAggregated[resultBucketRoundedKey]; !ok {
methodsPerProtocolPerTimeAggregated[resultBucketRoundedKey] = map[string]map[string]*AccumulativeStatsCounter{}
}
if _, ok := methodsPerProtocolPerTimeAggregated[resultBucketRoundedKey][protocolName]; !ok {
methodsPerProtocolPerTimeAggregated[resultBucketRoundedKey][protocolName] = map[string]*AccumulativeStatsCounter{}
}
if _, ok := methodsPerProtocolPerTimeAggregated[resultBucketRoundedKey][protocolName][methodName]; !ok {
methodsPerProtocolPerTimeAggregated[resultBucketRoundedKey][protocolName][methodName] = &AccumulativeStatsCounter{
Name: methodName,
EntriesCount: 0,
VolumeSizeBytes: 0,
}
}
methodsPerProtocolPerTimeAggregated[resultBucketRoundedKey][protocolName][methodName].EntriesCount += dataOfMethod.EntriesCount
methodsPerProtocolPerTimeAggregated[resultBucketRoundedKey][protocolName][methodName].VolumeSizeBytes += dataOfMethod.VolumeInBytes
if _, ok := methodsPerProtocolPerTimeAggregated[resultBucketRoundedKey][protocolName]; !ok {
methodsPerProtocolPerTimeAggregated[resultBucketRoundedKey][protocolName] = map[string]*AccumulativeStatsCounter{}
}
if _, ok := methodsPerProtocolPerTimeAggregated[resultBucketRoundedKey][protocolName][methodName]; !ok {
methodsPerProtocolPerTimeAggregated[resultBucketRoundedKey][protocolName][methodName] = &AccumulativeStatsCounter{
Name: methodName,
Color: getColorForMethod(protocolName, methodName),
EntriesCount: 0,
VolumeSizeBytes: 0,
}
}
methodsPerProtocolPerTimeAggregated[resultBucketRoundedKey][protocolName][methodName].EntriesCount += dataOfMethod.EntriesCount
methodsPerProtocolPerTimeAggregated[resultBucketRoundedKey][protocolName][methodName].VolumeSizeBytes += dataOfMethod.VolumeInBytes
}
}
bucketStatsIndex--
}
return methodsPerProtocolPerTimeAggregated, protocolToColor
return methodsPerProtocolPerTimeAggregated
}
func getAggregatedStatsAllTime(bucketStatsCopy BucketStats) (map[string]map[string]*AccumulativeStatsCounter, map[string]string) {
protocolToColor := make(map[string]string, 0)
func getAggregatedStats(stats BucketStats) map[string]map[string]*AccumulativeStatsCounter {
methodsPerProtocolAggregated := make(map[string]map[string]*AccumulativeStatsCounter, 0)
for _, countersOfTimeFrame := range bucketStatsCopy {
for _, countersOfTimeFrame := range stats {
for protocolName, value := range countersOfTimeFrame.ProtocolStats {
if _, ok := protocolToColor[protocolName]; !ok {
protocolToColor[protocolName] = value.Color
}
for method, countersValue := range value.MethodsStats {
if _, found := methodsPerProtocolAggregated[protocolName]; !found {
methodsPerProtocolAggregated[protocolName] = map[string]*AccumulativeStatsCounter{}
@@ -275,6 +330,7 @@ func getAggregatedStatsAllTime(bucketStatsCopy BucketStats) (map[string]map[stri
if _, found := methodsPerProtocolAggregated[protocolName][method]; !found {
methodsPerProtocolAggregated[protocolName][method] = &AccumulativeStatsCounter{
Name: method,
Color: getColorForMethod(protocolName, method),
EntriesCount: 0,
VolumeSizeBytes: 0,
}
@@ -284,5 +340,27 @@ func getAggregatedStatsAllTime(bucketStatsCopy BucketStats) (map[string]map[stri
}
}
}
return methodsPerProtocolAggregated, protocolToColor
return methodsPerProtocolAggregated
}
func getColorForMethod(protocolName string, methodName string) string {
hash := md5.Sum([]byte(fmt.Sprintf("%v_%v", protocolName, methodName)))
input := hex.EncodeToString(hash[:])
return fmt.Sprintf("#%v", input[:6])
}
func getAvailableProtocols(stats BucketStats) []string {
protocols := map[string]bool{}
for _, countersOfTimeFrame := range stats {
for protocolName := range countersOfTimeFrame.ProtocolStats {
protocols[protocolName] = true
}
}
result := make([]string, 0)
for protocol := range protocols {
result = append(result, protocol)
}
result = append(result, "ALL")
return result
}
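The auto-interval logic in `calculateInterval` above divides the requested time window by `MaxNumberOfBars` (30) and rounds the result up to the next entry of `validDurations`; a three-hour window therefore yields ten-minute buckets. A standalone sketch of that selection, using a trimmed copy of the duration table:

```go
// Worked example of the auto-interval selection in calculateInterval above:
// the requested window is split into at most 30 buckets and rounded up to
// the nearest "nice" duration. The duration table is a trimmed subset.
package main

import (
	"fmt"
	"time"
)

func main() {
	const maxNumberOfBars = 30
	validDurations := []time.Duration{
		time.Minute,
		2 * time.Minute,
		3 * time.Minute,
		5 * time.Minute,
		10 * time.Minute,
		15 * time.Minute,
	}

	window := 3 * time.Hour        // startTimeMs..endTimeMs span requested by the UI
	raw := window / maxNumberOfBars // 6 minutes
	for _, d := range validDurations {
		if d >= raw {
			fmt.Println("selected interval:", d) // prints 10m0s
			break
		}
	}
}
```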

View File

@@ -2,7 +2,6 @@ package providers
import (
"fmt"
"reflect"
"testing"
"time"
)
@@ -26,38 +25,6 @@ func TestGetBucketOfTimeStamp(t *testing.T) {
}
}
type DataForBucketBorderFunction struct {
EndTime time.Time
IntervalInSeconds int
NumberOfBars int
}
func TestGetBucketBorders(t *testing.T) {
tests := map[DataForBucketBorderFunction]time.Time{
DataForBucketBorderFunction{
time.Date(2022, time.Month(1), 1, 10, 34, 45, 0, time.UTC),
300,
10,
}: time.Date(2022, time.Month(1), 1, 9, 45, 0, 0, time.UTC),
DataForBucketBorderFunction{
time.Date(2022, time.Month(1), 1, 10, 35, 45, 0, time.UTC),
60,
5,
}: time.Date(2022, time.Month(1), 1, 10, 31, 00, 0, time.UTC),
}
for key, value := range tests {
t.Run(fmt.Sprintf("%v", key), func(t *testing.T) {
actual := getFirstBucketTime(key.EndTime, key.IntervalInSeconds, key.NumberOfBars)
if actual != value {
t.Errorf("unexpected result - expected: %v, actual: %v", value, actual)
}
})
}
}
func TestGetAggregatedStatsAllTime(t *testing.T) {
bucketStatsForTest := BucketStats{
&TimeFrameStatsValue{
@@ -140,10 +107,10 @@ func TestGetAggregatedStatsAllTime(t *testing.T) {
},
},
}
actual, _ := getAggregatedStatsAllTime(bucketStatsForTest)
actual := getAggregatedStats(bucketStatsForTest)
if !reflect.DeepEqual(actual, expected) {
t.Errorf("unexpected result - expected: %v, actual: %v", 3, len(actual))
if len(actual) != len(expected) {
t.Errorf("unexpected result - expected: %v, actual: %v", len(expected), len(actual))
}
}
@@ -227,10 +194,10 @@ func TestGetAggregatedStatsFromSpecificTime(t *testing.T) {
},
},
}
actual, _ := getAggregatedResultTimingFromSpecificTime(300, bucketStatsForTest, time.Date(2022, time.Month(1), 1, 10, 00, 00, 0, time.UTC))
actual := getAggregatedResultTiming(bucketStatsForTest, time.Minute*5)
if !reflect.DeepEqual(actual, expected) {
t.Errorf("unexpected result - expected: %v, actual: %v", 3, len(actual))
if len(actual) != len(expected) {
t.Errorf("unexpected result - expected: %v, actual: %v", len(expected), len(actual))
}
}
@@ -323,9 +290,9 @@ func TestGetAggregatedStatsFromSpecificTimeMultipleBuckets(t *testing.T) {
},
},
}
actual, _ := getAggregatedResultTimingFromSpecificTime(60, bucketStatsForTest, time.Date(2022, time.Month(1), 1, 10, 00, 00, 0, time.UTC))
actual := getAggregatedResultTiming(bucketStatsForTest, time.Minute)
if !reflect.DeepEqual(actual, expected) {
t.Errorf("unexpected result - expected: %v, actual: %v", 3, len(actual))
if len(actual) != len(expected) {
t.Errorf("unexpected result - expected: %v, actual: %v", len(expected), len(actual))
}
}

View File

@@ -26,7 +26,7 @@ func TestEntryAddedCount(t *testing.T) {
entryBucketKey := time.Date(2021, 1, 1, 10, 0, 0, 0, time.UTC)
valueLessThanBucketThreshold := time.Second * 130
mockSummery := &api.BaseEntry{Protocol: api.Protocol{Name: "mock"}, Method: "mock-method", Timestamp: entryBucketKey.Add(valueLessThanBucketThreshold).UnixNano()}
mockSummery := &api.BaseEntry{Protocol: api.Protocol{ProtocolSummary: api.ProtocolSummary{Name: "mock"}}, Method: "mock-method", Timestamp: entryBucketKey.Add(valueLessThanBucketThreshold).UnixNano()}
for _, entriesCount := range tests {
t.Run(fmt.Sprintf("%d", entriesCount), func(t *testing.T) {
for i := 0; i < entriesCount; i++ {
@@ -61,7 +61,7 @@ func TestEntryAddedVolume(t *testing.T) {
var expectedEntriesCount int
var expectedVolumeInGB float64
mockSummery := &api.BaseEntry{Protocol: api.Protocol{Name: "mock"}, Method: "mock-method", Timestamp: time.Date(2021, 1, 1, 10, 0, 0, 0, time.UTC).UnixNano()}
mockSummery := &api.BaseEntry{Protocol: api.Protocol{ProtocolSummary: api.ProtocolSummary{Name: "mock"}}, Method: "mock-method", Timestamp: time.Date(2021, 1, 1, 10, 0, 0, 0, time.UTC).UnixNano()}
for _, data := range tests {
t.Run(fmt.Sprintf("%d", len(data)), func(t *testing.T) {

View File

@@ -60,7 +60,7 @@ func GetTappedPodsStatus() []shared.TappedPodStatus {
func SetNodeToTappedPodMap(nodeToTappedPodsMap shared.NodeToPodsMap) {
summary := nodeToTappedPodsMap.Summary()
logger.Log.Infof("Setting node to tapped pods map to %v", summary)
logger.Log.Debugf("Setting node to tapped pods map to %v", summary)
nodeHostToTappedPodsMap = nodeToTappedPodsMap
}

184
agent/pkg/replay/replay.go Normal file
View File

@@ -0,0 +1,184 @@
package replay
import (
"bytes"
"encoding/json"
"fmt"
"net/http"
"strings"
"sync"
"time"
"github.com/google/uuid"
"github.com/up9inc/mizu/agent/pkg/app"
tapApi "github.com/up9inc/mizu/tap/api"
mizuhttp "github.com/up9inc/mizu/tap/extensions/http"
)
var (
inProcessRequestsLocker = sync.Mutex{}
inProcessRequests = 0
)
const maxParallelAction = 5
type Details struct {
Method string `json:"method"`
Url string `json:"url"`
Body string `json:"body"`
Headers map[string]string `json:"headers"`
}
type Response struct {
Success bool `json:"status"`
Data interface{} `json:"data"`
ErrorMessage string `json:"errorMessage"`
}
func incrementCounter() bool {
result := false
inProcessRequestsLocker.Lock()
if inProcessRequests < maxParallelAction {
inProcessRequests++
result = true
}
inProcessRequestsLocker.Unlock()
return result
}
func decrementCounter() {
inProcessRequestsLocker.Lock()
inProcessRequests--
inProcessRequestsLocker.Unlock()
}
func getEntryFromRequestResponse(extension *tapApi.Extension, request *http.Request, response *http.Response) *tapApi.Entry {
captureTime := time.Now()
itemTmp := tapApi.OutputChannelItem{
Protocol: *extension.Protocol,
ConnectionInfo: &tapApi.ConnectionInfo{
ClientIP: "",
ClientPort: "1",
ServerIP: "",
ServerPort: "1",
IsOutgoing: false,
},
Capture: "",
Timestamp: time.Now().UnixMilli(),
Pair: &tapApi.RequestResponsePair{
Request: tapApi.GenericMessage{
IsRequest: true,
CaptureTime: captureTime,
CaptureSize: 0,
Payload: &mizuhttp.HTTPPayload{
Type: mizuhttp.TypeHttpRequest,
Data: request,
},
},
Response: tapApi.GenericMessage{
IsRequest: false,
CaptureTime: captureTime,
CaptureSize: 0,
Payload: &mizuhttp.HTTPPayload{
Type: mizuhttp.TypeHttpResponse,
Data: response,
},
},
},
}
// Analyze is expecting an item that's marshalled and unmarshalled
itemMarshalled, err := json.Marshal(itemTmp)
if err != nil {
return nil
}
var finalItem *tapApi.OutputChannelItem
if err := json.Unmarshal(itemMarshalled, &finalItem); err != nil {
return nil
}
return extension.Dissector.Analyze(finalItem, "", "", "")
}
func ExecuteRequest(replayData *Details, timeout time.Duration) *Response {
if incrementCounter() {
defer decrementCounter()
client := &http.Client{
Timeout: timeout,
}
request, err := http.NewRequest(strings.ToUpper(replayData.Method), replayData.Url, bytes.NewBufferString(replayData.Body))
if err != nil {
return &Response{
Success: false,
Data: nil,
ErrorMessage: err.Error(),
}
}
for headerKey, headerValue := range replayData.Headers {
request.Header.Add(headerKey, headerValue)
}
request.Header.Add("x-mizu", uuid.New().String())
response, requestErr := client.Do(request)
if requestErr != nil {
return &Response{
Success: false,
Data: nil,
ErrorMessage: requestErr.Error(),
}
}
extension := app.ExtensionsMap["http"] // TODO: maybe pass the extension to the function so it can be tested
entry := getEntryFromRequestResponse(extension, request, response)
base := extension.Dissector.Summarize(entry)
var representation []byte
// Represent is expecting an entry that's marshalled and unmarshalled
entryMarshalled, err := json.Marshal(entry)
if err != nil {
return &Response{
Success: false,
Data: nil,
ErrorMessage: err.Error(),
}
}
var entryUnmarshalled *tapApi.Entry
if err := json.Unmarshal(entryMarshalled, &entryUnmarshalled); err != nil {
return &Response{
Success: false,
Data: nil,
ErrorMessage: err.Error(),
}
}
representation, err = extension.Dissector.Represent(entryUnmarshalled.Request, entryUnmarshalled.Response)
if err != nil {
return &Response{
Success: false,
Data: nil,
ErrorMessage: err.Error(),
}
}
return &Response{
Success: true,
Data: &tapApi.EntryWrapper{
Protocol: *extension.Protocol,
Representation: string(representation),
Data: entryUnmarshalled,
Base: base,
},
ErrorMessage: "",
}
} else {
return &Response{
Success: false,
Data: nil,
ErrorMessage: fmt.Sprintf("reached threshold of %d requests", maxParallelAction),
}
}
}
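For orientation, a minimal sketch of how this package could be driven from agent code, assuming the HTTP extension has already been registered in app.ExtensionsMap (which ExecuteRequest looks up). The target URL, headers and timeout below are illustrative and not part of this changeset.

package main

import (
	"fmt"
	"time"

	"github.com/up9inc/mizu/agent/pkg/replay"
)

func main() {
	// Illustrative values only; any URL reachable from the agent would do.
	details := &replay.Details{
		Method:  "GET",
		Url:     "http://httpbin.org/status/200",
		Body:    "",
		Headers: map[string]string{"Accept": "application/json"},
	}

	// ExecuteRequest performs the call, re-dissects it with the HTTP extension
	// and, on success, returns an EntryWrapper in Data.
	resp := replay.ExecuteRequest(details, 10*time.Second)
	if !resp.Success {
		fmt.Println("replay failed:", resp.ErrorMessage)
		return
	}
	fmt.Printf("replayed entry: %+v\n", resp.Data)
}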

View File

@@ -0,0 +1,106 @@
package replay
import (
"bytes"
"fmt"
"net/http"
"strings"
"testing"
"time"
"encoding/json"
"github.com/google/uuid"
tapApi "github.com/up9inc/mizu/tap/api"
mizuhttp "github.com/up9inc/mizu/tap/extensions/http"
)
func TestValid(t *testing.T) {
client := &http.Client{
Timeout: 10 * time.Second,
}
tests := map[string]*Details{
"40x": {
Method: "GET",
Url: "http://httpbin.org/status/404",
Body: "",
Headers: map[string]string{},
},
"20x": {
Method: "GET",
Url: "http://httpbin.org/status/200",
Body: "",
Headers: map[string]string{},
},
"50x": {
Method: "GET",
Url: "http://httpbin.org/status/500",
Body: "",
Headers: map[string]string{},
},
// TODO: this should be fixed; currently not working because of a header name containing ":"
//":path-header": {
// Method: "GET",
// Url: "http://httpbin.org/get",
// Body: "",
// Headers: map[string]string{
// ":path": "/get",
// },
// },
}
for testCaseName, replayData := range tests {
t.Run(fmt.Sprintf("%+v", testCaseName), func(t *testing.T) {
request, err := http.NewRequest(strings.ToUpper(replayData.Method), replayData.Url, bytes.NewBufferString(replayData.Body))
if err != nil {
t.Errorf("Error executing request")
}
for headerKey, headerValue := range replayData.Headers {
request.Header.Add(headerKey, headerValue)
}
request.Header.Add("x-mizu", uuid.New().String())
response, requestErr := client.Do(request)
if requestErr != nil {
t.Errorf("failed: %v, ", requestErr)
}
extensionHttp := &tapApi.Extension{}
dissectorHttp := mizuhttp.NewDissector()
dissectorHttp.Register(extensionHttp)
extensionHttp.Dissector = dissectorHttp
extension := extensionHttp
entry := getEntryFromRequestResponse(extension, request, response)
base := extension.Dissector.Summarize(entry)
// Represent is expecting an entry that's marshalled and unmarshalled
entryMarshalled, err := json.Marshal(entry)
if err != nil {
t.Errorf("failed marshaling entry: %v, ", err)
}
var entryUnmarshalled *tapApi.Entry
if err := json.Unmarshal(entryMarshalled, &entryUnmarshalled); err != nil {
t.Errorf("failed unmarshaling entry: %v, ", err)
}
var representation []byte
representation, err = extension.Dissector.Represent(entryUnmarshalled.Request, entryUnmarshalled.Response)
if err != nil {
t.Errorf("failed: %v, ", err)
}
result := &tapApi.EntryWrapper{
Protocol: *extension.Protocol,
Representation: string(representation),
Data: entry,
Base: base,
}
t.Logf("%+v", result)
//data, _ := json.MarshalIndent(result, "", " ")
//t.Logf("%+v", string(data))
})
}
}

View File

@@ -0,0 +1,13 @@
package routes
import (
"github.com/gin-gonic/gin"
"github.com/up9inc/mizu/agent/pkg/controllers"
)
// ReplayRoutes defines the group of replay routes.
func ReplayRoutes(app *gin.Engine) {
routeGroup := app.Group("/replay")
routeGroup.POST("/", controllers.ReplayRequest)
}
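A hedged sketch of exercising the new route over HTTP. The JSON field names mirror replay.Details; that controllers.ReplayRequest binds the body to that struct is an assumption, since the controller itself is not shown in this excerpt. The address uses the default API server port 8899 from the shared consts.

package main

import (
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// Field names follow replay.Details; the URL inside the body is hypothetical.
	body := `{"method":"GET","url":"http://httpbin.org/status/200","body":"","headers":{"Accept":"application/json"}}`

	// Assumes a locally reachable agent on the default API server port (8899).
	resp, err := http.Post("http://localhost:8899/replay/", "application/json", strings.NewReader(body))
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("replay endpoint returned:", resp.Status)
}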

View File

@@ -15,9 +15,8 @@ func StatusRoutes(ginApp *gin.Engine) {
routeGroup.GET("/connectedTappersCount", controllers.GetConnectedTappersCount)
routeGroup.GET("/tap", controllers.GetTappingStatus)
routeGroup.GET("/general", controllers.GetGeneralStats) // get general stats about entries in DB
routeGroup.GET("/accumulative", controllers.GetAccumulativeStats)
routeGroup.GET("/accumulativeTiming", controllers.GetAccumulativeStatsTiming)
routeGroup.GET("/general", controllers.GetGeneralStats)
routeGroup.GET("/trafficStats", controllers.GetTrafficStats)
routeGroup.GET("/resolving", controllers.GetCurrentResolvingInformation)
}

View File

@@ -1,124 +0,0 @@
package rules
import (
"encoding/base64"
"encoding/json"
"fmt"
"reflect"
"regexp"
"strings"
"github.com/up9inc/mizu/agent/pkg/har"
"github.com/up9inc/mizu/logger"
"github.com/up9inc/mizu/shared"
"github.com/yalp/jsonpath"
)
type RulesMatched struct {
Matched bool `json:"matched"`
Rule shared.RulePolicy `json:"rule"`
}
func appendRulesMatched(rulesMatched []RulesMatched, matched bool, rule shared.RulePolicy) []RulesMatched {
return append(rulesMatched, RulesMatched{Matched: matched, Rule: rule})
}
func ValidatePath(URLFromRule string, URL string) bool {
if URLFromRule != "" {
matchPath, err := regexp.MatchString(URLFromRule, URL)
if err != nil || !matchPath {
return false
}
}
return true
}
func ValidateService(serviceFromRule string, service string) bool {
if serviceFromRule != "" {
matchService, err := regexp.MatchString(serviceFromRule, service)
if err != nil || !matchService {
return false
}
}
return true
}
func MatchRequestPolicy(harEntry har.Entry, service string) (resultPolicyToSend []RulesMatched, isEnabled bool) {
enforcePolicy, err := shared.DecodeEnforcePolicy(fmt.Sprintf("%s%s", shared.ConfigDirPath, shared.ValidationRulesFileName))
if err == nil && len(enforcePolicy.Rules) > 0 {
isEnabled = true
}
for _, rule := range enforcePolicy.Rules {
if !ValidatePath(rule.Path, harEntry.Request.URL) || !ValidateService(rule.Service, service) {
continue
}
if rule.Type == "json" {
var bodyJsonMap interface{}
contentTextDecoded, _ := base64.StdEncoding.DecodeString(harEntry.Response.Content.Text)
if err := json.Unmarshal(contentTextDecoded, &bodyJsonMap); err != nil {
continue
}
out, err := jsonpath.Read(bodyJsonMap, rule.Key)
if err != nil || out == nil {
continue
}
var matchValue bool
if reflect.TypeOf(out).Kind() == reflect.String {
matchValue, err = regexp.MatchString(rule.Value, out.(string))
if err != nil {
continue
}
logger.Log.Info(matchValue, rule.Value)
} else {
val := fmt.Sprint(out)
matchValue, err = regexp.MatchString(rule.Value, val)
if err != nil {
continue
}
}
resultPolicyToSend = appendRulesMatched(resultPolicyToSend, matchValue, rule)
} else if rule.Type == "header" {
for j := range harEntry.Response.Headers {
matchKey, err := regexp.MatchString(rule.Key, harEntry.Response.Headers[j].Name)
if err != nil {
continue
}
if matchKey {
matchValue, err := regexp.MatchString(rule.Value, harEntry.Response.Headers[j].Value)
if err != nil {
continue
}
resultPolicyToSend = appendRulesMatched(resultPolicyToSend, matchValue, rule)
}
}
} else {
resultPolicyToSend = appendRulesMatched(resultPolicyToSend, true, rule)
}
}
return
}
func PassedValidationRules(rulesMatched []RulesMatched) (bool, int64, int) {
var numberOfRulesMatched = len(rulesMatched)
var responseTime int64 = -1
if numberOfRulesMatched == 0 {
return false, 0, numberOfRulesMatched
}
for _, rule := range rulesMatched {
if !rule.Matched {
return false, responseTime, numberOfRulesMatched
} else {
if strings.ToLower(rule.Rule.Type) == "slo" {
if rule.Rule.ResponseTime < responseTime || responseTime == -1 {
responseTime = rule.Rule.ResponseTime
}
}
}
}
return true, responseTime, numberOfRulesMatched
}

View File

@@ -50,11 +50,13 @@ var (
IP: fmt.Sprintf("%s.%s", Ip, UnresolvedNodeName),
}
ProtocolHttp = &tapApi.Protocol{
Name: "http",
ProtocolSummary: tapApi.ProtocolSummary{
Name: "http",
Version: "1.1",
Abbreviation: "HTTP",
},
LongName: "Hypertext Transfer Protocol -- HTTP/1.1",
Abbreviation: "HTTP",
Macro: "http",
Version: "1.1",
BackgroundColor: "#205cf5",
ForegroundColor: "#ffffff",
FontSize: 12,
@@ -63,11 +65,13 @@ var (
Priority: 0,
}
ProtocolRedis = &tapApi.Protocol{
Name: "redis",
ProtocolSummary: tapApi.ProtocolSummary{
Name: "redis",
Version: "3.x",
Abbreviation: "REDIS",
},
LongName: "Redis Serialization Protocol",
Abbreviation: "REDIS",
Macro: "redis",
Version: "3.x",
BackgroundColor: "#a41e11",
ForegroundColor: "#ffffff",
FontSize: 11,

View File

@@ -48,13 +48,12 @@ func init() {
tapCmd.Flags().Uint16P(configStructs.GuiPortTapName, "p", defaultTapConfig.GuiPort, "Provide a custom port for the web interface webserver")
tapCmd.Flags().StringSliceP(configStructs.NamespacesTapName, "n", defaultTapConfig.Namespaces, "Namespaces selector")
tapCmd.Flags().BoolP(configStructs.AllNamespacesTapName, "A", defaultTapConfig.AllNamespaces, "Tap all namespaces")
tapCmd.Flags().StringSliceP(configStructs.PlainTextFilterRegexesTapName, "r", defaultTapConfig.PlainTextFilterRegexes, "List of regex expressions that are used to filter matching values from text/plain http bodies")
tapCmd.Flags().Bool(configStructs.EnableRedactionTapName, defaultTapConfig.EnableRedaction, "Enables redaction of potentially sensitive request/response headers and body values")
tapCmd.Flags().String(configStructs.HumanMaxEntriesDBSizeTapName, defaultTapConfig.HumanMaxEntriesDBSize, "Override the default max entries db size")
tapCmd.Flags().String(configStructs.InsertionFilterName, defaultTapConfig.InsertionFilter, "Set the insertion filter. Accepts string or a file path.")
tapCmd.Flags().Bool(configStructs.DryRunTapName, defaultTapConfig.DryRun, "Preview of all pods matching the regex, without tapping them")
tapCmd.Flags().String(configStructs.EnforcePolicyFile, defaultTapConfig.EnforcePolicyFile, "Yaml file path with policy rules")
tapCmd.Flags().Bool(configStructs.ServiceMeshName, defaultTapConfig.ServiceMesh, "Record decrypted traffic if the cluster is configured with a service mesh and with mtls")
tapCmd.Flags().Bool(configStructs.TlsName, defaultTapConfig.Tls, "Record tls traffic")
tapCmd.Flags().Bool(configStructs.ProfilerName, defaultTapConfig.Profiler, "Run pprof server")
tapCmd.Flags().Int(configStructs.MaxLiveStreamsName, defaultTapConfig.MaxLiveStreams, "Maximum live tcp streams to handle concurrently")
}

View File

@@ -12,7 +12,6 @@ import (
"github.com/up9inc/mizu/cli/telemetry"
"github.com/up9inc/mizu/cli/utils"
"gopkg.in/yaml.v3"
core "k8s.io/api/core/v1"
k8serrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -45,16 +44,6 @@ func RunMizuTap() {
apiProvider = apiserver.NewProvider(GetApiServerUrl(config.Config.Tap.GuiPort), apiserver.DefaultRetries, apiserver.DefaultTimeout)
var err error
var serializedValidationRules string
if config.Config.Tap.EnforcePolicyFile != "" {
serializedValidationRules, err = readValidationRules(config.Config.Tap.EnforcePolicyFile)
if err != nil {
logger.Log.Errorf(uiUtils.Error, fmt.Sprintf("Error reading policy file: %v", errormessage.FormatError(err)))
return
}
}
kubernetesProvider, err := getKubernetesProviderForCli()
if err != nil {
return
@@ -98,7 +87,7 @@ func RunMizuTap() {
}
logger.Log.Infof("Waiting for Mizu Agent to start...")
if state.mizuServiceAccountExists, err = resources.CreateTapMizuResources(ctx, kubernetesProvider, serializedValidationRules, serializedMizuConfig, config.Config.IsNsRestrictedMode(), config.Config.MizuResourcesNamespace, config.Config.AgentImage, config.Config.Tap.MaxEntriesDBSizeBytes(), config.Config.Tap.ApiServerResources, config.Config.ImagePullPolicy(), config.Config.LogLevel(), config.Config.Tap.Profiler); err != nil {
if state.mizuServiceAccountExists, err = resources.CreateTapMizuResources(ctx, kubernetesProvider, serializedMizuConfig, config.Config.IsNsRestrictedMode(), config.Config.MizuResourcesNamespace, config.Config.AgentImage, config.Config.Tap.MaxEntriesDBSizeBytes(), config.Config.Tap.ApiServerResources, config.Config.ImagePullPolicy(), config.Config.LogLevel(), config.Config.Tap.Profiler); err != nil {
var statusError *k8serrors.StatusError
if errors.As(err, &statusError) && (statusError.ErrStatus.Reason == metav1.StatusReasonAlreadyExists) {
logger.Log.Info("Mizu is already running in this namespace, change the `mizu-resources-namespace` configuration or run `mizu clean` to remove the currently running Mizu instance")
@@ -162,20 +151,22 @@ func printTappedPodsPreview(ctx context.Context, kubernetesProvider *kubernetes.
}
}
func startTapperSyncer(ctx context.Context, cancel context.CancelFunc, provider *kubernetes.Provider, targetNamespaces []string, mizuApiFilteringOptions api.TrafficFilteringOptions, startTime time.Time) error {
func startTapperSyncer(ctx context.Context, cancel context.CancelFunc, provider *kubernetes.Provider, targetNamespaces []string, startTime time.Time) error {
tapperSyncer, err := kubernetes.CreateAndStartMizuTapperSyncer(ctx, provider, kubernetes.TapperSyncerConfig{
TargetNamespaces: targetNamespaces,
PodFilterRegex: *config.Config.Tap.PodRegex(),
MizuResourcesNamespace: config.Config.MizuResourcesNamespace,
AgentImage: config.Config.AgentImage,
TapperResources: config.Config.Tap.TapperResources,
ImagePullPolicy: config.Config.ImagePullPolicy(),
LogLevel: config.Config.LogLevel(),
IgnoredUserAgents: config.Config.Tap.IgnoredUserAgents,
MizuApiFilteringOptions: mizuApiFilteringOptions,
TargetNamespaces: targetNamespaces,
PodFilterRegex: *config.Config.Tap.PodRegex(),
MizuResourcesNamespace: config.Config.MizuResourcesNamespace,
AgentImage: config.Config.AgentImage,
TapperResources: config.Config.Tap.TapperResources,
ImagePullPolicy: config.Config.ImagePullPolicy(),
LogLevel: config.Config.LogLevel(),
MizuApiFilteringOptions: api.TrafficFilteringOptions{
IgnoredUserAgents: config.Config.Tap.IgnoredUserAgents,
},
MizuServiceAccountExists: state.mizuServiceAccountExists,
ServiceMesh: config.Config.Tap.ServiceMesh,
Tls: config.Config.Tap.Tls,
MaxLiveStreams: config.Config.Tap.MaxLiveStreams,
}, startTime)
if err != nil {
@@ -239,36 +230,6 @@ func getErrorDisplayTextForK8sTapManagerError(err kubernetes.K8sTapManagerError)
}
}
func readValidationRules(file string) (string, error) {
rules, err := shared.DecodeEnforcePolicy(file)
if err != nil {
return "", err
}
newContent, _ := yaml.Marshal(&rules)
return string(newContent), nil
}
func getMizuApiFilteringOptions() (*api.TrafficFilteringOptions, error) {
var compiledRegexSlice []*api.SerializableRegexp
if config.Config.Tap.PlainTextFilterRegexes != nil && len(config.Config.Tap.PlainTextFilterRegexes) > 0 {
compiledRegexSlice = make([]*api.SerializableRegexp, 0)
for _, regexStr := range config.Config.Tap.PlainTextFilterRegexes {
compiledRegex, err := api.CompileRegexToSerializableRegexp(regexStr)
if err != nil {
return nil, err
}
compiledRegexSlice = append(compiledRegexSlice, compiledRegex)
}
}
return &api.TrafficFilteringOptions{
PlainTextMaskingRegexes: compiledRegexSlice,
IgnoredUserAgents: config.Config.Tap.IgnoredUserAgents,
EnableRedaction: config.Config.Tap.EnableRedaction,
}, nil
}
func watchApiServerPod(ctx context.Context, kubernetesProvider *kubernetes.Provider, cancel context.CancelFunc) {
podExactRegex := regexp.MustCompile(fmt.Sprintf("^%s$", kubernetes.ApiServerPodName))
podWatchHelper := kubernetes.NewPodWatchHelper(kubernetesProvider, podExactRegex)
@@ -386,8 +347,7 @@ func watchApiServerEvents(ctx context.Context, kubernetesProvider *kubernetes.Pr
func postApiServerStarted(ctx context.Context, kubernetesProvider *kubernetes.Provider, cancel context.CancelFunc) {
startProxyReportErrorIfAny(kubernetesProvider, ctx, cancel, config.Config.Tap.GuiPort)
options, _ := getMizuApiFilteringOptions()
if err := startTapperSyncer(ctx, cancel, kubernetesProvider, state.targetNamespaces, *options, state.startTime); err != nil {
if err := startTapperSyncer(ctx, cancel, kubernetesProvider, state.targetNamespaces, state.startTime); err != nil {
logger.Log.Errorf(uiUtils.Error, fmt.Sprintf("Error starting mizu tapper syncer: %v", errormessage.FormatError(err)))
cancel()
}

View File

@@ -6,6 +6,7 @@ import (
"io/ioutil"
"os"
"regexp"
"strings"
"github.com/up9inc/mizu/cli/uiUtils"
"github.com/up9inc/mizu/shared"
@@ -15,38 +16,43 @@ import (
)
const (
GuiPortTapName = "gui-port"
NamespacesTapName = "namespaces"
AllNamespacesTapName = "all-namespaces"
PlainTextFilterRegexesTapName = "regex-masking"
EnableRedactionTapName = "redact"
HumanMaxEntriesDBSizeTapName = "max-entries-db-size"
InsertionFilterName = "insertion-filter"
DryRunTapName = "dry-run"
EnforcePolicyFile = "traffic-validation-file"
ServiceMeshName = "service-mesh"
TlsName = "tls"
ProfilerName = "profiler"
GuiPortTapName = "gui-port"
NamespacesTapName = "namespaces"
AllNamespacesTapName = "all-namespaces"
EnableRedactionTapName = "redact"
HumanMaxEntriesDBSizeTapName = "max-entries-db-size"
InsertionFilterName = "insertion-filter"
DryRunTapName = "dry-run"
ServiceMeshName = "service-mesh"
TlsName = "tls"
ProfilerName = "profiler"
MaxLiveStreamsName = "max-live-streams"
)
type TapConfig struct {
PodRegexStr string `yaml:"regex" default:".*"`
GuiPort uint16 `yaml:"gui-port" default:"8899"`
ProxyHost string `yaml:"proxy-host" default:"127.0.0.1"`
Namespaces []string `yaml:"namespaces"`
AllNamespaces bool `yaml:"all-namespaces" default:"false"`
PlainTextFilterRegexes []string `yaml:"regex-masking"`
IgnoredUserAgents []string `yaml:"ignored-user-agents"`
EnableRedaction bool `yaml:"redact" default:"false"`
HumanMaxEntriesDBSize string `yaml:"max-entries-db-size" default:"200MB"`
InsertionFilter string `yaml:"insertion-filter" default:""`
DryRun bool `yaml:"dry-run" default:"false"`
EnforcePolicyFile string `yaml:"traffic-validation-file"`
ApiServerResources shared.Resources `yaml:"api-server-resources"`
TapperResources shared.Resources `yaml:"tapper-resources"`
ServiceMesh bool `yaml:"service-mesh" default:"false"`
Tls bool `yaml:"tls" default:"false"`
Profiler bool `yaml:"profiler" default:"false"`
PodRegexStr string `yaml:"regex" default:".*"`
GuiPort uint16 `yaml:"gui-port" default:"8899"`
ProxyHost string `yaml:"proxy-host" default:"127.0.0.1"`
Namespaces []string `yaml:"namespaces"`
AllNamespaces bool `yaml:"all-namespaces" default:"false"`
IgnoredUserAgents []string `yaml:"ignored-user-agents"`
EnableRedaction bool `yaml:"redact" default:"false"`
RedactPatterns struct {
RequestHeaders []string `yaml:"request-headers"`
ResponseHeaders []string `yaml:"response-headers"`
RequestBody []string `yaml:"request-body"`
ResponseBody []string `yaml:"response-body"`
RequestQueryParams []string `yaml:"request-query-params"`
} `yaml:"redact-patterns"`
HumanMaxEntriesDBSize string `yaml:"max-entries-db-size" default:"200MB"`
InsertionFilter string `yaml:"insertion-filter" default:""`
DryRun bool `yaml:"dry-run" default:"false"`
ApiServerResources shared.Resources `yaml:"api-server-resources"`
TapperResources shared.Resources `yaml:"tapper-resources"`
ServiceMesh bool `yaml:"service-mesh" default:"false"`
Tls bool `yaml:"tls" default:"false"`
Profiler bool `yaml:"profiler" default:"false"`
MaxLiveStreams int `yaml:"max-live-streams" default:"500"`
}
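// Illustrative config fragment for the struct above (key names come from the yaml tags;
// the values are examples, not defaults):
//   redact: true
//   redact-patterns:
//     request-headers: [Authorization]
//     response-body: [password]
// getRedactFilter below turns these lists into a redact(...) insertion filter.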
func (config *TapConfig) PodRegex() *regexp.Regexp {
@@ -71,9 +77,48 @@ func (config *TapConfig) GetInsertionFilter() string {
}
}
}
redactFilter := getRedactFilter(config)
if insertionFilter != "" && redactFilter != "" {
return fmt.Sprintf("(%s) and (%s)", insertionFilter, redactFilter)
} else if insertionFilter == "" && redactFilter != "" {
return redactFilter
}
return insertionFilter
}
func getRedactFilter(config *TapConfig) string {
if !config.EnableRedaction {
return ""
}
var redactValues []string
for _, requestHeader := range config.RedactPatterns.RequestHeaders {
redactValues = append(redactValues, fmt.Sprintf("request.headers['%s']", requestHeader))
}
for _, responseHeader := range config.RedactPatterns.ResponseHeaders {
redactValues = append(redactValues, fmt.Sprintf("response.headers['%s']", responseHeader))
}
for _, requestBody := range config.RedactPatterns.RequestBody {
redactValues = append(redactValues, fmt.Sprintf("request.postData.text.json()...%s", requestBody))
}
for _, responseBody := range config.RedactPatterns.ResponseBody {
redactValues = append(redactValues, fmt.Sprintf("response.content.text.json()...%s", responseBody))
}
for _, requestQueryParams := range config.RedactPatterns.RequestQueryParams {
redactValues = append(redactValues, fmt.Sprintf("request.queryString['%s']", requestQueryParams))
}
if len(redactValues) == 0 {
return ""
}
return fmt.Sprintf("redact(\"%s\")", strings.Join(redactValues, "\",\""))
}
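// Example (illustrative): with redaction enabled, request-headers: [Authorization] and
// response-body: [password] produce the filter
//   redact("request.headers['Authorization']","response.content.text.json()...password")
// which GetInsertionFilter then ANDs with any user-supplied insertion filter.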
func (config *TapConfig) Validate() error {
_, compileErr := regexp.Compile(config.PodRegexStr)
if compileErr != nil {

View File

@@ -14,14 +14,14 @@ import (
core "k8s.io/api/core/v1"
)
func CreateTapMizuResources(ctx context.Context, kubernetesProvider *kubernetes.Provider, serializedValidationRules string, serializedMizuConfig string, isNsRestrictedMode bool, mizuResourcesNamespace string, agentImage string, maxEntriesDBSizeBytes int64, apiServerResources shared.Resources, imagePullPolicy core.PullPolicy, logLevel logging.Level, profiler bool) (bool, error) {
func CreateTapMizuResources(ctx context.Context, kubernetesProvider *kubernetes.Provider, serializedMizuConfig string, isNsRestrictedMode bool, mizuResourcesNamespace string, agentImage string, maxEntriesDBSizeBytes int64, apiServerResources shared.Resources, imagePullPolicy core.PullPolicy, logLevel logging.Level, profiler bool) (bool, error) {
if !isNsRestrictedMode {
if err := createMizuNamespace(ctx, kubernetesProvider, mizuResourcesNamespace); err != nil {
return false, err
}
}
if err := createMizuConfigmap(ctx, kubernetesProvider, serializedValidationRules, serializedMizuConfig, mizuResourcesNamespace); err != nil {
if err := createMizuConfigmap(ctx, kubernetesProvider, serializedMizuConfig, mizuResourcesNamespace); err != nil {
return false, err
}
@@ -71,8 +71,8 @@ func createMizuNamespace(ctx context.Context, kubernetesProvider *kubernetes.Pro
return err
}
func createMizuConfigmap(ctx context.Context, kubernetesProvider *kubernetes.Provider, serializedValidationRules string, serializedMizuConfig string, mizuResourcesNamespace string) error {
err := kubernetesProvider.CreateConfigMap(ctx, mizuResourcesNamespace, kubernetes.ConfigMapName, serializedValidationRules, serializedMizuConfig)
func createMizuConfigmap(ctx context.Context, kubernetesProvider *kubernetes.Provider, serializedMizuConfig string, mizuResourcesNamespace string) error {
err := kubernetesProvider.CreateConfigMap(ctx, mizuResourcesNamespace, kubernetes.ConfigMapName, serializedMizuConfig)
return err
}

View File

@@ -57,7 +57,7 @@ log "Writing output to $MIZU_BENCHMARK_OUTPUT_DIR"
cd $MIZU_HOME || exit 1
export HOST_MODE=0
export SENSITIVE_DATA_FILTERING_OPTIONS='{"EnableRedaction": false}'
export SENSITIVE_DATA_FILTERING_OPTIONS='{}'
export MIZU_DEBUG_DISABLE_PCAP=false
export MIZU_DEBUG_DISABLE_TCP_REASSEMBLY=false
export MIZU_DEBUG_DISABLE_TCP_STREAM=false

View File

@@ -6,7 +6,6 @@ const (
NodeNameEnvVar = "NODE_NAME"
ConfigDirPath = "/app/config/"
DataDirPath = "/app/data/"
ValidationRulesFileName = "validation-rules.yaml"
ConfigFileName = "mizu-config.json"
DefaultApiServerPort = 8899
LogLevelEnvVar = "LOG_LEVEL"

View File

@@ -4,11 +4,9 @@ go 1.17
require (
github.com/docker/go-units v0.4.0
github.com/golang-jwt/jwt/v4 v4.2.0
github.com/op/go-logging v0.0.0-20160315200505-970db520ece7
github.com/up9inc/mizu/logger v0.0.0
github.com/up9inc/mizu/tap/api v0.0.0
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b
k8s.io/api v0.23.3
k8s.io/apimachinery v0.23.3
k8s.io/client-go v0.23.3
@@ -38,11 +36,11 @@ require (
github.com/go-openapi/jsonreference v0.19.6 // indirect
github.com/go-openapi/swag v0.21.1 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang-jwt/jwt/v4 v4.2.0 // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/google/btree v1.0.1 // indirect
github.com/google/go-cmp v0.5.7 // indirect
github.com/google/gofuzz v1.2.0 // indirect
github.com/google/martian v2.1.0+incompatible // indirect
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 // indirect
github.com/google/uuid v1.3.0 // indirect
github.com/googleapis/gnostic v0.5.5 // indirect
@@ -81,6 +79,7 @@ require (
google.golang.org/protobuf v1.27.1 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b // indirect
k8s.io/cli-runtime v0.23.3 // indirect
k8s.io/component-base v0.23.3 // indirect
k8s.io/klog/v2 v2.40.1 // indirect

View File

@@ -282,7 +282,6 @@ github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/
github.com/google/gofuzz v1.1.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0=
github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/martian v2.1.0+incompatible h1:/CP5g8u/VJHijgedC/Legn3BAbAaWPgecwXBIDzw5no=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/martian/v3 v3.0.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
github.com/google/martian/v3 v3.1.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=

View File

@@ -43,11 +43,11 @@ type TapperSyncerConfig struct {
TapperResources shared.Resources
ImagePullPolicy core.PullPolicy
LogLevel logging.Level
IgnoredUserAgents []string
MizuApiFilteringOptions api.TrafficFilteringOptions
MizuServiceAccountExists bool
ServiceMesh bool
Tls bool
MaxLiveStreams int
}
func CreateAndStartMizuTapperSyncer(ctx context.Context, kubernetesProvider *Provider, config TapperSyncerConfig, startTime time.Time) (*MizuTapperSyncer, error) {
@@ -337,7 +337,8 @@ func (tapperSyncer *MizuTapperSyncer) updateMizuTappers() error {
tapperSyncer.config.MizuApiFilteringOptions,
tapperSyncer.config.LogLevel,
tapperSyncer.config.ServiceMesh,
tapperSyncer.config.Tls); err != nil {
tapperSyncer.config.Tls,
tapperSyncer.config.MaxLiveStreams); err != nil {
return err
}

View File

@@ -10,6 +10,7 @@ import (
"net/url"
"path/filepath"
"regexp"
"strconv"
"github.com/op/go-logging"
"github.com/up9inc/mizu/logger"
@@ -382,11 +383,11 @@ func (provider *Provider) GetMizuApiServerPodObject(opts *ApiServerOptions, moun
Tolerations: []core.Toleration{
{
Operator: core.TolerationOpExists,
Effect: core.TaintEffectNoExecute,
Effect: core.TaintEffectNoExecute,
},
{
Operator: core.TolerationOpExists,
Effect: core.TaintEffectNoSchedule,
Effect: core.TaintEffectNoSchedule,
},
},
},
@@ -684,11 +685,8 @@ func (provider *Provider) handleRemovalError(err error) error {
return err
}
func (provider *Provider) CreateConfigMap(ctx context.Context, namespace string, configMapName string, serializedValidationRules string, serializedMizuConfig string) error {
func (provider *Provider) CreateConfigMap(ctx context.Context, namespace string, configMapName string, serializedMizuConfig string) error {
configMapData := make(map[string]string)
if serializedValidationRules != "" {
configMapData[shared.ValidationRulesFileName] = serializedValidationRules
}
configMapData[shared.ConfigFileName] = serializedMizuConfig
configMap := &core.ConfigMap{
@@ -711,7 +709,7 @@ func (provider *Provider) CreateConfigMap(ctx context.Context, namespace string,
return nil
}
func (provider *Provider) ApplyMizuTapperDaemonSet(ctx context.Context, namespace string, daemonSetName string, podImage string, tapperPodName string, apiServerPodIp string, nodeNames []string, serviceAccountName string, resources shared.Resources, imagePullPolicy core.PullPolicy, mizuApiFilteringOptions api.TrafficFilteringOptions, logLevel logging.Level, serviceMesh bool, tls bool) error {
func (provider *Provider) ApplyMizuTapperDaemonSet(ctx context.Context, namespace string, daemonSetName string, podImage string, tapperPodName string, apiServerPodIp string, nodeNames []string, serviceAccountName string, resources shared.Resources, imagePullPolicy core.PullPolicy, mizuApiFilteringOptions api.TrafficFilteringOptions, logLevel logging.Level, serviceMesh bool, tls bool, maxLiveStreams int) error {
logger.Log.Debugf("Applying %d tapper daemon sets, ns: %s, daemonSetName: %s, podImage: %s, tapperPodName: %s", len(nodeNames), namespace, daemonSetName, podImage, tapperPodName)
if len(nodeNames) == 0 {
@@ -729,6 +727,7 @@ func (provider *Provider) ApplyMizuTapperDaemonSet(ctx context.Context, namespac
"--tap",
"--api-server-address", fmt.Sprintf("ws://%s/wsTapper", apiServerPodIp),
"--nodefrag",
"--max-live-streams", strconv.Itoa(maxLiveStreams),
}
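// The tapper consumes this as its --max-live-streams flag; the value originates from
// TapConfig.MaxLiveStreams (default 500) and is threaded through TapperSyncerConfig.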
if serviceMesh {

View File

@@ -1,13 +1,8 @@
package shared
import (
"io/ioutil"
"strings"
"github.com/op/go-logging"
"github.com/up9inc/mizu/logger"
"gopkg.in/yaml.v3"
v1 "k8s.io/api/core/v1"
)
@@ -55,7 +50,6 @@ type WebSocketMessageMetadata struct {
MessageType WebSocketMessageType `json:"messageType,omitempty"`
}
type WebSocketStatusMessage struct {
*WebSocketMessageMetadata
TappingStatus []TappedPodStatus `json:"tappingStatus"`
@@ -136,83 +130,3 @@ type HealthResponse struct {
type VersionResponse struct {
Ver string `json:"ver"`
}
type RulesPolicy struct {
Rules []RulePolicy `yaml:"rules"`
}
type RulePolicy struct {
Type string `yaml:"type"`
Service string `yaml:"service"`
Path string `yaml:"path"`
Method string `yaml:"method"`
Key string `yaml:"key"`
Value string `yaml:"value"`
ResponseTime int64 `yaml:"response-time"`
Name string `yaml:"name"`
}
type RulesMatched struct {
Matched bool `json:"matched"`
Rule RulePolicy `json:"rule"`
}
func (r *RulePolicy) validateType() bool {
permitedTypes := []string{"json", "header", "slo"}
_, found := Find(permitedTypes, r.Type)
if !found {
logger.Log.Errorf("Only json, header and slo types are supported on rule definition. This rule will be ignored. rule name: %s", r.Name)
found = false
}
if strings.ToLower(r.Type) == "slo" {
if r.ResponseTime <= 0 {
logger.Log.Errorf("When rule type is slo, the field response-time should be specified and have a value >= 1. rule name: %s", r.Name)
found = false
}
}
return found
}
func (rules *RulesPolicy) ValidateRulesPolicy() []int {
invalidIndex := make([]int, 0)
for i := range rules.Rules {
validated := rules.Rules[i].validateType()
if !validated {
invalidIndex = append(invalidIndex, i)
}
}
return invalidIndex
}
func Find(slice []string, val string) (int, bool) {
for i, item := range slice {
if item == val {
return i, true
}
}
return -1, false
}
func DecodeEnforcePolicy(path string) (RulesPolicy, error) {
content, err := ioutil.ReadFile(path)
enforcePolicy := RulesPolicy{}
if err != nil {
return enforcePolicy, err
}
err = yaml.Unmarshal(content, &enforcePolicy)
if err != nil {
return enforcePolicy, err
}
invalidIndex := enforcePolicy.ValidateRulesPolicy()
var k = 0
if len(invalidIndex) != 0 {
for i, rule := range enforcePolicy.Rules {
if !ContainsInt(invalidIndex, i) {
enforcePolicy.Rules[k] = rule
k++
}
}
enforcePolicy.Rules = enforcePolicy.Rules[:k]
}
return enforcePolicy, nil
}

View File

@@ -10,15 +10,6 @@ func Contains(slice []string, containsValue string) bool {
return false
}
func ContainsInt(slice []int, containsValue int) bool {
for _, sliceValue := range slice {
if sliceValue == containsValue {
return true
}
}
return false
}
func Unique(slice []string) []string {
keys := make(map[string]bool)
var list []string

View File

@@ -2,7 +2,9 @@ package api
import (
"bufio"
"fmt"
"net"
"strings"
"sync"
"time"
@@ -14,12 +16,29 @@ const UnknownNamespace = ""
var UnknownIp = net.IP{0, 0, 0, 0}
var UnknownPort uint16 = 0
type ProtocolSummary struct {
Name string `json:"name"`
Version string `json:"version"`
Abbreviation string `json:"abbr"`
}
func (protocol *ProtocolSummary) ToString() string {
return fmt.Sprintf("%s?%s?%s", protocol.Name, protocol.Version, protocol.Abbreviation)
}
func GetProtocolSummary(inputString string) *ProtocolSummary {
splitted := strings.SplitN(inputString, "?", 3)
return &ProtocolSummary{
Name: splitted[0],
Version: splitted[1],
Abbreviation: splitted[2],
}
}
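// Example (illustrative): ProtocolSummary{Name: "http", Version: "1.1", Abbreviation: "HTTP"}
// serializes to "http?1.1?HTTP" via ToString, and GetProtocolSummary("http?1.1?HTTP")
// reconstructs the same struct. Dissectors key their protocol maps by this string.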
type Protocol struct {
Name string `json:"name"`
ProtocolSummary
LongName string `json:"longName"`
Abbreviation string `json:"abbr"`
Macro string `json:"macro"`
Version string `json:"version"`
BackgroundColor string `json:"backgroundColor"`
ForegroundColor string `json:"foregroundColor"`
FontSize int8 `json:"fontSize"`
@@ -91,7 +110,6 @@ type OutputChannelItem struct {
Timestamp int64
ConnectionInfo *ConnectionInfo
Pair *RequestResponsePair
Summary *BaseEntry
Namespace string
}
@@ -116,6 +134,7 @@ func (p *ReadProgress) Reset() {
type Dissector interface {
Register(*Extension)
GetProtocols() map[string]*Protocol
Ping()
Dissect(b *bufio.Reader, reader TcpReader, options *TrafficFilteringOptions) error
Analyze(item *OutputChannelItem, resolvedSource string, resolvedDestination string, namespace string) *Entry
@@ -151,7 +170,7 @@ func (e *Emitting) Emit(item *OutputChannelItem) {
type Entry struct {
Id string `json:"id"`
Protocol Protocol `json:"proto"`
Protocol ProtocolSummary `json:"protocol"`
Capture Capture `json:"capture"`
Source *TCP `json:"src"`
Destination *TCP `json:"dst"`
@@ -164,40 +183,30 @@ type Entry struct {
RequestSize int `json:"requestSize"`
ResponseSize int `json:"responseSize"`
ElapsedTime int64 `json:"elapsedTime"`
Rules ApplicableRules `json:"rules,omitempty"`
}
type EntryWrapper struct {
Protocol Protocol `json:"protocol"`
Representation string `json:"representation"`
Data *Entry `json:"data"`
Base *BaseEntry `json:"base"`
Rules []map[string]interface{} `json:"rulesMatched,omitempty"`
IsRulesEnabled bool `json:"isRulesEnabled"`
Protocol Protocol `json:"protocol"`
Representation string `json:"representation"`
Data *Entry `json:"data"`
Base *BaseEntry `json:"base"`
}
type BaseEntry struct {
Id string `json:"id"`
Protocol Protocol `json:"proto,omitempty"`
Capture Capture `json:"capture"`
Summary string `json:"summary,omitempty"`
SummaryQuery string `json:"summaryQuery,omitempty"`
Status int `json:"status"`
StatusQuery string `json:"statusQuery"`
Method string `json:"method,omitempty"`
MethodQuery string `json:"methodQuery,omitempty"`
Timestamp int64 `json:"timestamp,omitempty"`
Source *TCP `json:"src"`
Destination *TCP `json:"dst"`
IsOutgoing bool `json:"isOutgoing,omitempty"`
Latency int64 `json:"latency"`
Rules ApplicableRules `json:"rules,omitempty"`
}
type ApplicableRules struct {
Latency int64 `json:"latency,omitempty"`
Status bool `json:"status,omitempty"`
NumberOfRules int `json:"numberOfRules,omitempty"`
Id string `json:"id"`
Protocol Protocol `json:"proto,omitempty"`
Capture Capture `json:"capture"`
Summary string `json:"summary,omitempty"`
SummaryQuery string `json:"summaryQuery,omitempty"`
Status int `json:"status"`
StatusQuery string `json:"statusQuery"`
Method string `json:"method,omitempty"`
MethodQuery string `json:"methodQuery,omitempty"`
Timestamp int64 `json:"timestamp,omitempty"`
Source *TCP `json:"src"`
Destination *TCP `json:"dst"`
IsOutgoing bool `json:"isOutgoing,omitempty"`
Latency int64 `json:"latency"`
}
const (

View File

@@ -1,7 +1,5 @@
package api
type TrafficFilteringOptions struct {
IgnoredUserAgents []string
PlainTextMaskingRegexes []*SerializableRegexp
EnableRedaction bool
IgnoredUserAgents []string
}

View File

@@ -13,4 +13,4 @@ test-pull-bin:
test-pull-expect:
@mkdir -p expect
@[ "${skipexpect}" ] && echo "Skipping downloading expected JSONs" || gsutil -o 'GSUtil:parallel_process_count=5' -o 'GSUtil:parallel_thread_count=5' -m cp -r gs://static.up9.io/mizu/test-pcap/expect11/amqp/\* expect
@[ "${skipexpect}" ] && echo "Skipping downloading expected JSONs" || gsutil -o 'GSUtil:parallel_process_count=5' -o 'GSUtil:parallel_thread_count=5' -m cp -r gs://static.up9.io/mizu/test-pcap/expect16/amqp/\* expect

View File

@@ -4,16 +4,20 @@ go 1.17
require (
github.com/stretchr/testify v1.7.0
github.com/up9inc/mizu/logger v0.0.0
github.com/up9inc/mizu/tap/api v0.0.0
)
require (
github.com/davecgh/go-spew v1.1.0 // indirect
github.com/op/go-logging v0.0.0-20160315200505-970db520ece7 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/up9inc/mizu/tap/dbgctl v0.0.0 // indirect
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c // indirect
)
replace github.com/up9inc/mizu/logger v0.0.0 => ../../../logger
replace github.com/up9inc/mizu/tap/api v0.0.0 => ../../api
replace github.com/up9inc/mizu/tap/dbgctl v0.0.0 => ../../dbgctl

View File

@@ -1,5 +1,7 @@
github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/op/go-logging v0.0.0-20160315200505-970db520ece7 h1:lDH9UUVJtmYCjyT0CI4q8xvlXPxeZ0gYCVvWbmPlp88=
github.com/op/go-logging v0.0.0-20160315200505-970db520ece7/go.mod h1:HzydrMdWErDVzsI23lYNej1Htcns9BCg93Dk0bBINWk=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=

View File

@@ -5,8 +5,8 @@ import (
"fmt"
"sort"
"strconv"
"time"
"github.com/up9inc/mizu/logger"
"github.com/up9inc/mizu/tap/api"
)
@@ -25,14 +25,14 @@ var connectionMethodMap = map[int]string{
61: "connection unblocked",
}
// var channelMethodMap = map[int]string{
// 10: "channel open",
// 11: "channel open-ok",
// 20: "channel flow",
// 21: "channel flow-ok",
// 40: "channel close",
// 41: "channel close-ok",
// }
var channelMethodMap = map[int]string{
10: "channel open",
11: "channel open-ok",
20: "channel flow",
21: "channel flow-ok",
40: "channel close",
41: "channel close-ok",
}
var exchangeMethodMap = map[int]string{
10: "exchange declare",
@@ -94,29 +94,41 @@ type AMQPWrapper struct {
Details interface{} `json:"details"`
}
func emitAMQP(event interface{}, _type string, method string, connectionInfo *api.ConnectionInfo, captureTime time.Time, captureSize int, emitter api.Emitter, capture api.Capture) {
request := &api.GenericMessage{
IsRequest: true,
CaptureTime: captureTime,
Payload: AMQPPayload{
Data: &AMQPWrapper{
Method: method,
Url: "",
Details: event,
},
},
type emptyResponse struct {
}
const emptyMethod = "empty"
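// emptyResponse/emptyMethod stand in for the missing side of methods that are not answered
// by an -Ok frame in these captures (basic.publish, basic.deliver, connection.start,
// connection.tune), so every emitted item still forms a complete request/response pair.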
func getIdent(reader api.TcpReader, methodFrame *MethodFrame) (ident string) {
tcpID := reader.GetTcpID()
// To match methods to their Ok(s)
methodId := methodFrame.MethodId - methodFrame.MethodId%10
if reader.GetIsClient() {
ident = fmt.Sprintf(
"%s_%s_%s_%s_%d_%d_%d",
tcpID.SrcIP,
tcpID.DstIP,
tcpID.SrcPort,
tcpID.DstPort,
methodFrame.ChannelId,
methodFrame.ClassId,
methodId,
)
} else {
ident = fmt.Sprintf(
"%s_%s_%s_%s_%d_%d_%d",
tcpID.DstIP,
tcpID.SrcIP,
tcpID.DstPort,
tcpID.SrcPort,
methodFrame.ChannelId,
methodFrame.ClassId,
methodId,
)
}
item := &api.OutputChannelItem{
Protocol: protocol,
Capture: capture,
Timestamp: captureTime.UnixNano() / int64(time.Millisecond),
ConnectionInfo: connectionInfo,
Pair: &api.RequestResponsePair{
Request: *request,
Response: api.GenericMessage{},
},
}
emitter.Emit(item)
return
}
func representProperties(properties map[string]interface{}, rep []interface{}) ([]interface{}, string, string) {
@@ -460,6 +472,36 @@ func representQueueDeclare(event map[string]interface{}) []interface{} {
return rep
}
func representQueueDeclareOk(event map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
details, _ := json.Marshal([]api.TableData{
{
Name: "Queue",
Value: event["queue"].(string),
Selector: `response.queue`,
},
{
Name: "Message Count",
Value: fmt.Sprintf("%g", event["messageCount"].(float64)),
Selector: `response.messageCount`,
},
{
Name: "Consumer Count",
Value: fmt.Sprintf("%g", event["consumerCount"].(float64)),
Selector: `response.consumerCount`,
},
})
rep = append(rep, api.SectionData{
Type: api.TABLE,
Title: "Details",
Data: string(details),
})
return rep
}
func representExchangeDeclare(event map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
@@ -571,7 +613,7 @@ func representConnectionStart(event map[string]interface{}) []interface{} {
x, _ := json.Marshal(value)
outcome = string(x)
default:
panic("Unknown data type for the server property!")
logger.Log.Info("Unknown data type for the server property!")
}
headers = append(headers, api.TableData{
Name: name,
@@ -593,6 +635,65 @@ func representConnectionStart(event map[string]interface{}) []interface{} {
return rep
}
func representConnectionStartOk(event map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
details, _ := json.Marshal([]api.TableData{
{
Name: "Mechanism",
Value: event["mechanism"].(string),
Selector: `response.mechanism`,
},
{
Name: "Response",
Value: event["response"].(string),
Selector: `response.response`,
},
{
Name: "Locale",
Value: event["locale"].(string),
Selector: `response.locale`,
},
})
rep = append(rep, api.SectionData{
Type: api.TABLE,
Title: "Details",
Data: string(details),
})
if event["clientProperties"] != nil {
headers := make([]api.TableData, 0)
for name, value := range event["clientProperties"].(map[string]interface{}) {
var outcome string
switch v := value.(type) {
case string:
outcome = v
case map[string]interface{}:
x, _ := json.Marshal(value)
outcome = string(x)
default:
logger.Log.Info("Unknown data type for the client property!")
}
headers = append(headers, api.TableData{
Name: name,
Value: outcome,
Selector: fmt.Sprintf(`response.clientProperties["%s"]`, name),
})
}
sort.Slice(headers, func(i, j int) bool {
return headers[i].Name < headers[j].Name
})
headersMarshaled, _ := json.Marshal(headers)
rep = append(rep, api.SectionData{
Type: api.TABLE,
Title: "Client Properties",
Data: string(headersMarshaled),
})
}
return rep
}
func representConnectionClose(event map[string]interface{}) []interface{} {
replyCode := ""
@@ -750,3 +851,122 @@ func representBasicConsume(event map[string]interface{}) []interface{} {
return rep
}
func representBasicConsumeOk(event map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
details, _ := json.Marshal([]api.TableData{
{
Name: "Consumer Tag",
Value: event["consumerTag"].(string),
Selector: `response.consumerTag`,
},
})
rep = append(rep, api.SectionData{
Type: api.TABLE,
Title: "Details",
Data: string(details),
})
return rep
}
func representConnectionOpen(event map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
details, _ := json.Marshal([]api.TableData{
{
Name: "Virtual Host",
Value: event["virtualHost"].(string),
Selector: `request.virtualHost`,
},
})
rep = append(rep, api.SectionData{
Type: api.TABLE,
Title: "Details",
Data: string(details),
})
return rep
}
func representConnectionTune(event map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
details, _ := json.Marshal([]api.TableData{
{
Name: "Channel Max",
Value: fmt.Sprintf("%g", event["channelMax"].(float64)),
Selector: `request.channelMax`,
},
{
Name: "Frame Max",
Value: fmt.Sprintf("%g", event["frameMax"].(float64)),
Selector: `request.frameMax`,
},
{
Name: "Heartbeat",
Value: fmt.Sprintf("%g", event["heartbeat"].(float64)),
Selector: `request.heartbeat`,
},
})
rep = append(rep, api.SectionData{
Type: api.TABLE,
Title: "Details",
Data: string(details),
})
return rep
}
func representBasicCancel(event map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
details, _ := json.Marshal([]api.TableData{
{
Name: "Consumer Tag",
Value: event["consumerTag"].(string),
Selector: `response.consumerTag`,
},
{
Name: "NoWait",
Value: strconv.FormatBool(event["noWait"].(bool)),
Selector: `request.noWait`,
},
})
rep = append(rep, api.SectionData{
Type: api.TABLE,
Title: "Details",
Data: string(details),
})
return rep
}
func representBasicCancelOk(event map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
details, _ := json.Marshal([]api.TableData{
{
Name: "Consumer Tag",
Value: event["consumerTag"].(string),
Selector: `response.consumerTag`,
},
})
rep = append(rep, api.SectionData{
Type: api.TABLE,
Title: "Details",
Data: string(details),
})
return rep
}
func representEmpty(event map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
return rep
}

View File

@@ -13,11 +13,13 @@ import (
)
var protocol = api.Protocol{
Name: "amqp",
ProtocolSummary: api.ProtocolSummary{
Name: "amqp",
Version: "0-9-1",
Abbreviation: "AMQP",
},
LongName: "Advanced Message Queuing Protocol 0-9-1",
Abbreviation: "AMQP",
Macro: "amqp",
Version: "0-9-1",
BackgroundColor: "#ff6600",
ForegroundColor: "#ffffff",
FontSize: 12,
@@ -26,32 +28,30 @@ var protocol = api.Protocol{
Priority: 1,
}
var protocolsMap = map[string]*api.Protocol{
protocol.ToString(): &protocol,
}
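// Keyed by ProtocolSummary.ToString() (e.g. "amqp?0-9-1?AMQP") so that Summarize can
// recover the full Protocol (colors, macro, priority) from the summary stored on an Entry.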
type dissecting string
func (d dissecting) Register(extension *api.Extension) {
extension.Protocol = &protocol
}
func (d dissecting) GetProtocols() map[string]*api.Protocol {
return protocolsMap
}
func (d dissecting) Ping() {
log.Printf("pong %s", protocol.Name)
}
const amqpRequest string = "amqp_request"
func (d dissecting) Dissect(b *bufio.Reader, reader api.TcpReader, options *api.TrafficFilteringOptions) error {
r := AmqpReader{b}
var remaining int
var header *HeaderFrame
connectionInfo := &api.ConnectionInfo{
ClientIP: reader.GetTcpID().SrcIP,
ClientPort: reader.GetTcpID().SrcPort,
ServerIP: reader.GetTcpID().DstIP,
ServerPort: reader.GetTcpID().DstPort,
IsOutgoing: true,
}
eventBasicPublish := &BasicPublish{
Exchange: "",
RoutingKey: "",
@@ -73,6 +73,10 @@ func (d dissecting) Dissect(b *bufio.Reader, reader api.TcpReader, options *api.
var lastMethodFrameMessage Message
var ident string
isClient := reader.GetIsClient()
reqResMatcher := reader.GetReqResMatcher().(*requestResponseMatcher)
for {
frameVal, err := r.readFrame()
if err == io.EOF {
@@ -111,16 +115,22 @@ func (d dissecting) Dissect(b *bufio.Reader, reader api.TcpReader, options *api.
switch lastMethodFrameMessage.(type) {
case *BasicPublish:
eventBasicPublish.Body = f.Body
emitAMQP(*eventBasicPublish, amqpRequest, basicMethodMap[40], connectionInfo, reader.GetCaptureTime(), reader.GetReadProgress().Current(), reader.GetEmitter(), reader.GetParent().GetOrigin())
reqResMatcher.emitEvent(isClient, ident, basicMethodMap[40], *eventBasicPublish, reader)
reqResMatcher.emitEvent(!isClient, ident, emptyMethod, &emptyResponse{}, reader)
case *BasicDeliver:
eventBasicDeliver.Body = f.Body
emitAMQP(*eventBasicDeliver, amqpRequest, basicMethodMap[60], connectionInfo, reader.GetCaptureTime(), reader.GetReadProgress().Current(), reader.GetEmitter(), reader.GetParent().GetOrigin())
reqResMatcher.emitEvent(!isClient, ident, basicMethodMap[60], *eventBasicDeliver, reader)
reqResMatcher.emitEvent(isClient, ident, emptyMethod, &emptyResponse{}, reader)
}
case *MethodFrame:
reader.GetParent().SetProtocol(&protocol)
lastMethodFrameMessage = f.Method
ident = getIdent(reader, f)
switch m := f.Method.(type) {
case *BasicPublish:
eventBasicPublish.Exchange = m.Exchange
@@ -136,7 +146,10 @@ func (d dissecting) Dissect(b *bufio.Reader, reader api.TcpReader, options *api.
NoWait: m.NoWait,
Arguments: m.Arguments,
}
emitAMQP(*eventQueueBind, amqpRequest, queueMethodMap[20], connectionInfo, reader.GetCaptureTime(), reader.GetReadProgress().Current(), reader.GetEmitter(), reader.GetParent().GetOrigin())
reqResMatcher.emitEvent(isClient, ident, queueMethodMap[20], *eventQueueBind, reader)
case *QueueBindOk:
reqResMatcher.emitEvent(isClient, ident, queueMethodMap[21], m, reader)
case *BasicConsume:
eventBasicConsume := &BasicConsume{
@@ -148,7 +161,10 @@ func (d dissecting) Dissect(b *bufio.Reader, reader api.TcpReader, options *api.
NoWait: m.NoWait,
Arguments: m.Arguments,
}
emitAMQP(*eventBasicConsume, amqpRequest, basicMethodMap[20], connectionInfo, reader.GetCaptureTime(), reader.GetReadProgress().Current(), reader.GetEmitter(), reader.GetParent().GetOrigin())
reqResMatcher.emitEvent(isClient, ident, basicMethodMap[20], *eventBasicConsume, reader)
case *BasicConsumeOk:
reqResMatcher.emitEvent(isClient, ident, basicMethodMap[21], m, reader)
case *BasicDeliver:
eventBasicDeliver.ConsumerTag = m.ConsumerTag
@@ -167,7 +183,10 @@ func (d dissecting) Dissect(b *bufio.Reader, reader api.TcpReader, options *api.
NoWait: m.NoWait,
Arguments: m.Arguments,
}
emitAMQP(*eventQueueDeclare, amqpRequest, queueMethodMap[10], connectionInfo, reader.GetCaptureTime(), reader.GetReadProgress().Current(), reader.GetEmitter(), reader.GetParent().GetOrigin())
reqResMatcher.emitEvent(isClient, ident, queueMethodMap[10], *eventQueueDeclare, reader)
case *QueueDeclareOk:
reqResMatcher.emitEvent(isClient, ident, queueMethodMap[11], m, reader)
case *ExchangeDeclare:
eventExchangeDeclare := &ExchangeDeclare{
@@ -180,17 +199,19 @@ func (d dissecting) Dissect(b *bufio.Reader, reader api.TcpReader, options *api.
NoWait: m.NoWait,
Arguments: m.Arguments,
}
emitAMQP(*eventExchangeDeclare, amqpRequest, exchangeMethodMap[10], connectionInfo, reader.GetCaptureTime(), reader.GetReadProgress().Current(), reader.GetEmitter(), reader.GetParent().GetOrigin())
reqResMatcher.emitEvent(isClient, ident, exchangeMethodMap[10], *eventExchangeDeclare, reader)
case *ExchangeDeclareOk:
reqResMatcher.emitEvent(isClient, ident, exchangeMethodMap[11], m, reader)
case *ConnectionStart:
eventConnectionStart := &ConnectionStart{
VersionMajor: m.VersionMajor,
VersionMinor: m.VersionMinor,
ServerProperties: m.ServerProperties,
Mechanisms: m.Mechanisms,
Locales: m.Locales,
}
emitAMQP(*eventConnectionStart, amqpRequest, connectionMethodMap[10], connectionInfo, reader.GetCaptureTime(), reader.GetReadProgress().Current(), reader.GetEmitter(), reader.GetParent().GetOrigin())
// In our tests, *ConnectionStart does not result in *ConnectionStartOk
reqResMatcher.emitEvent(!isClient, ident, connectionMethodMap[10], m, reader)
reqResMatcher.emitEvent(isClient, ident, emptyMethod, &emptyResponse{}, reader)
case *ConnectionStartOk:
// In our tests, *ConnectionStart does not result in *ConnectionStartOk
reqResMatcher.emitEvent(isClient, ident, connectionMethodMap[11], m, reader)
case *ConnectionClose:
eventConnectionClose := &ConnectionClose{
@@ -199,7 +220,40 @@ func (d dissecting) Dissect(b *bufio.Reader, reader api.TcpReader, options *api.
ClassId: m.ClassId,
MethodId: m.MethodId,
}
emitAMQP(*eventConnectionClose, amqpRequest, connectionMethodMap[50], connectionInfo, reader.GetCaptureTime(), reader.GetReadProgress().Current(), reader.GetEmitter(), reader.GetParent().GetOrigin())
reqResMatcher.emitEvent(isClient, ident, connectionMethodMap[50], *eventConnectionClose, reader)
case *ConnectionCloseOk:
reqResMatcher.emitEvent(isClient, ident, connectionMethodMap[51], m, reader)
case *connectionOpen:
eventConnectionOpen := &connectionOpen{
VirtualHost: m.VirtualHost,
}
reqResMatcher.emitEvent(isClient, ident, connectionMethodMap[40], *eventConnectionOpen, reader)
case *connectionOpenOk:
reqResMatcher.emitEvent(isClient, ident, connectionMethodMap[41], m, reader)
case *channelOpen:
reqResMatcher.emitEvent(isClient, ident, channelMethodMap[10], m, reader)
case *channelOpenOk:
reqResMatcher.emitEvent(isClient, ident, channelMethodMap[11], m, reader)
case *connectionTune:
// In our tests, *connectionTune does not result in *connectionTuneOk
reqResMatcher.emitEvent(!isClient, ident, connectionMethodMap[30], m, reader)
reqResMatcher.emitEvent(isClient, ident, emptyMethod, &emptyResponse{}, reader)
case *connectionTuneOk:
// In our tests, *connectionTune does not result in *connectionTuneOk
reqResMatcher.emitEvent(isClient, ident, connectionMethodMap[31], m, reader)
case *basicCancel:
reqResMatcher.emitEvent(isClient, ident, basicMethodMap[30], m, reader)
case *basicCancelOk:
reqResMatcher.emitEvent(isClient, ident, basicMethodMap[31], m, reader)
}
default:
@@ -210,11 +264,19 @@ func (d dissecting) Dissect(b *bufio.Reader, reader api.TcpReader, options *api.
func (d dissecting) Analyze(item *api.OutputChannelItem, resolvedSource string, resolvedDestination string, namespace string) *api.Entry {
request := item.Pair.Request.Payload.(map[string]interface{})
response := item.Pair.Response.Payload.(map[string]interface{})
reqDetails := request["details"].(map[string]interface{})
resDetails := response["details"].(map[string]interface{})
elapsedTime := item.Pair.Response.CaptureTime.Sub(item.Pair.Request.CaptureTime).Round(time.Millisecond).Milliseconds()
if elapsedTime < 0 {
elapsedTime = 0
}
reqDetails["method"] = request["method"]
resDetails["method"] = response["method"]
return &api.Entry{
Protocol: protocol,
Protocol: protocol.ProtocolSummary,
Capture: item.Capture,
Source: &api.TCP{
Name: resolvedSource,
@@ -226,13 +288,15 @@ func (d dissecting) Analyze(item *api.OutputChannelItem, resolvedSource string,
IP: item.ConnectionInfo.ServerIP,
Port: item.ConnectionInfo.ServerPort,
},
Namespace: namespace,
Outgoing: item.ConnectionInfo.IsOutgoing,
Request: reqDetails,
RequestSize: item.Pair.Request.CaptureSize,
Timestamp: item.Timestamp,
StartTime: item.Pair.Request.CaptureTime,
ElapsedTime: 0,
Namespace: namespace,
Outgoing: item.ConnectionInfo.IsOutgoing,
Request: reqDetails,
Response: resDetails,
RequestSize: item.Pair.Request.CaptureSize,
ResponseSize: item.Pair.Response.CaptureSize,
Timestamp: item.Timestamp,
StartTime: item.Pair.Request.CaptureTime,
ElapsedTime: elapsedTime,
}
}
@@ -273,11 +337,26 @@ func (d dissecting) Summarize(entry *api.Entry) *api.BaseEntry {
case basicMethodMap[20]:
summary = entry.Request["queue"].(string)
summaryQuery = fmt.Sprintf(`request.queue == "%s"`, summary)
case connectionMethodMap[40]:
summary = entry.Request["virtualHost"].(string)
summaryQuery = fmt.Sprintf(`request.virtualHost == "%s"`, summary)
case connectionMethodMap[30]:
summary = fmt.Sprintf("%g", entry.Request["channelMax"].(float64))
summaryQuery = fmt.Sprintf(`request.channelMax == "%s"`, summary)
case connectionMethodMap[31]:
summary = fmt.Sprintf("%g", entry.Request["channelMax"].(float64))
summaryQuery = fmt.Sprintf(`request.channelMax == "%s"`, summary)
case basicMethodMap[30]:
summary = entry.Request["consumerTag"].(string)
summaryQuery = fmt.Sprintf(`request.consumerTag == "%s"`, summary)
case basicMethodMap[31]:
summary = entry.Request["consumerTag"].(string)
summaryQuery = fmt.Sprintf(`request.consumerTag == "%s"`, summary)
}
return &api.BaseEntry{
Id: entry.Id,
Protocol: entry.Protocol,
Protocol: *protocolsMap[entry.Protocol.ToString()],
Capture: entry.Capture,
Summary: summary,
SummaryQuery: summaryQuery,
@@ -290,13 +369,14 @@ func (d dissecting) Summarize(entry *api.Entry) *api.BaseEntry {
Destination: entry.Destination,
IsOutgoing: entry.Outgoing,
Latency: entry.ElapsedTime,
Rules: entry.Rules,
}
}
func (d dissecting) Represent(request map[string]interface{}, response map[string]interface{}) (object []byte, err error) {
representation := make(map[string]interface{})
var repRequest []interface{}
var repResponse []interface{}
switch request["method"].(string) {
case basicMethodMap[40]:
repRequest = representBasicPublish(request)
@@ -314,20 +394,56 @@ func (d dissecting) Represent(request map[string]interface{}, response map[strin
repRequest = representQueueBind(request)
case basicMethodMap[20]:
repRequest = representBasicConsume(request)
case connectionMethodMap[40]:
repRequest = representConnectionOpen(request)
case channelMethodMap[10]:
repRequest = representEmpty(request)
case connectionMethodMap[30]:
repRequest = representConnectionTune(request)
case basicMethodMap[30]:
repRequest = representBasicCancel(request)
}
switch response["method"].(string) {
case queueMethodMap[11]:
repResponse = representQueueDeclareOk(response)
case exchangeMethodMap[11]:
repResponse = representEmpty(response)
case connectionMethodMap[11]:
repResponse = representConnectionStartOk(response)
case connectionMethodMap[51]:
repResponse = representEmpty(response)
case basicMethodMap[21]:
repResponse = representBasicConsumeOk(response)
case queueMethodMap[21]:
repResponse = representEmpty(response)
case connectionMethodMap[41]:
repResponse = representEmpty(response)
case channelMethodMap[11]:
repResponse = representEmpty(response)
case connectionMethodMap[31]:
repResponse = representConnectionTune(response)
case basicMethodMap[31]:
repResponse = representBasicCancelOk(response)
case emptyMethod:
repResponse = representEmpty(response)
}
representation["request"] = repRequest
representation["response"] = repResponse
object, err = json.Marshal(representation)
return
}
func (d dissecting) Macros() map[string]string {
return map[string]string{
`amqp`: fmt.Sprintf(`proto.name == "%s"`, protocol.Name),
`amqp`: fmt.Sprintf(`protocol.name == "%s"`, protocol.Name),
}
}
func (d dissecting) NewResponseRequestMatcher() api.RequestResponseMatcher {
return nil
return createResponseRequestMatcher()
}
var Dissector dissecting

View File

@@ -44,7 +44,7 @@ func TestRegister(t *testing.T) {
func TestMacros(t *testing.T) {
expectedMacros := map[string]string{
"amqp": `proto.name == "amqp"`,
"amqp": `protocol.name == "amqp"`,
}
dissector := NewDissector()
macros := dissector.Macros()

View File

@@ -0,0 +1,113 @@
package amqp
import (
"sync"
"time"
"github.com/up9inc/mizu/tap/api"
)
// Key is {client_addr}_{client_port}_{dest_addr}_{dest_port}_{channel_id}_{class_id}_{method_id}
type requestResponseMatcher struct {
openMessagesMap *sync.Map
}
func createResponseRequestMatcher() api.RequestResponseMatcher {
return &requestResponseMatcher{openMessagesMap: &sync.Map{}}
}
func (matcher *requestResponseMatcher) GetMap() *sync.Map {
return matcher.openMessagesMap
}
func (matcher *requestResponseMatcher) SetMaxTry(value int) {
}
func (matcher *requestResponseMatcher) emitEvent(isRequest bool, ident string, method string, event interface{}, reader api.TcpReader) {
reader.GetParent().SetProtocol(&protocol)
var item *api.OutputChannelItem
if isRequest {
item = matcher.registerRequest(ident, method, event, reader.GetCaptureTime(), reader.GetReadProgress().Current())
} else {
item = matcher.registerResponse(ident, method, event, reader.GetCaptureTime(), reader.GetReadProgress().Current())
}
if item != nil {
item.ConnectionInfo = &api.ConnectionInfo{
ClientIP: reader.GetTcpID().SrcIP,
ClientPort: reader.GetTcpID().SrcPort,
ServerIP: reader.GetTcpID().DstIP,
ServerPort: reader.GetTcpID().DstPort,
IsOutgoing: true,
}
item.Capture = reader.GetParent().GetOrigin()
reader.GetEmitter().Emit(item)
}
}
func (matcher *requestResponseMatcher) registerRequest(ident string, method string, request interface{}, captureTime time.Time, captureSize int) *api.OutputChannelItem {
requestAMQPMessage := api.GenericMessage{
IsRequest: true,
CaptureTime: captureTime,
CaptureSize: captureSize,
Payload: AMQPPayload{
Data: &AMQPWrapper{
Method: method,
Url: "",
Details: request,
},
},
}
if response, found := matcher.openMessagesMap.LoadAndDelete(ident); found {
// Type assertion always succeeds because all of the map's values are of the *api.GenericMessage type
responseAMQPMessage := response.(*api.GenericMessage)
if responseAMQPMessage.IsRequest {
return nil
}
return matcher.preparePair(&requestAMQPMessage, responseAMQPMessage)
}
matcher.openMessagesMap.Store(ident, &requestAMQPMessage)
return nil
}
func (matcher *requestResponseMatcher) registerResponse(ident string, method string, response interface{}, captureTime time.Time, captureSize int) *api.OutputChannelItem {
responseAMQPMessage := api.GenericMessage{
IsRequest: false,
CaptureTime: captureTime,
CaptureSize: captureSize,
Payload: AMQPPayload{
Data: &AMQPWrapper{
Method: method,
Url: "",
Details: response,
},
},
}
if request, found := matcher.openMessagesMap.LoadAndDelete(ident); found {
// Type assertion always succeeds because all of the map's values are of the *api.GenericMessage type
requestAMQPMessage := request.(*api.GenericMessage)
if !requestAMQPMessage.IsRequest {
return nil
}
return matcher.preparePair(requestAMQPMessage, &responseAMQPMessage)
}
matcher.openMessagesMap.Store(ident, &responseAMQPMessage)
return nil
}
func (matcher *requestResponseMatcher) preparePair(requestAMQPMessage *api.GenericMessage, responseAMQPMessage *api.GenericMessage) *api.OutputChannelItem {
return &api.OutputChannelItem{
Protocol: protocol,
Timestamp: requestAMQPMessage.CaptureTime.UnixNano() / int64(time.Millisecond),
ConnectionInfo: nil,
Pair: &api.RequestResponsePair{
Request: *requestAMQPMessage,
Response: *responseAMQPMessage,
},
}
}
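// Illustrative sketch, not part of this change: how the matcher pairs an
// out-of-order response with its request. The ident string and method names
// below are placeholder values following the key format documented above.
func examplePairing() *api.OutputChannelItem {
	m := createResponseRequestMatcher().(*requestResponseMatcher)
	ident := "10.0.0.2_34567_10.0.0.1_5672_1_10_11"
	// The response arrives first and is parked in openMessagesMap (returns nil)...
	_ = m.registerResponse(ident, "connection start-ok", map[string]interface{}{}, time.Now(), 64)
	// ...and the matching request completes and returns the request/response pair.
	return m.registerRequest(ident, "connection start", map[string]interface{}{}, time.Now(), 128)
}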

View File

@@ -81,10 +81,10 @@ func (msg *ConnectionStart) read(r io.Reader) (err error) {
}
type ConnectionStartOk struct {
ClientProperties Table
Mechanism string
Response string
Locale string
ClientProperties Table `json:"clientProperties"`
Mechanism string `json:"mechanism"`
Response string `json:"response"`
Locale string `json:"locale"`
}
func (msg *ConnectionStartOk) read(r io.Reader) (err error) {
@@ -135,9 +135,9 @@ func (msg *connectionSecureOk) read(r io.Reader) (err error) {
}
type connectionTune struct {
ChannelMax uint16
FrameMax uint32
Heartbeat uint16
ChannelMax uint16 `json:"channelMax"`
FrameMax uint32 `json:"frameMax"`
Heartbeat uint16 `json:"heartbeat"`
}
func (msg *connectionTune) read(r io.Reader) (err error) {
@@ -181,7 +181,7 @@ func (msg *connectionTuneOk) read(r io.Reader) (err error) {
}
type connectionOpen struct {
VirtualHost string
VirtualHost string `json:"virtualHost"`
reserved1 string
reserved2 bool
}
@@ -580,9 +580,9 @@ func (msg *QueueDeclare) read(r io.Reader) (err error) {
}
type QueueDeclareOk struct {
Queue string
MessageCount uint32
ConsumerCount uint32
Queue string `json:"queue"`
MessageCount uint32 `json:"messageCount"`
ConsumerCount uint32 `json:"consumerCount"`
}
func (msg *QueueDeclareOk) read(r io.Reader) (err error) {
@@ -840,7 +840,7 @@ func (msg *BasicConsume) read(r io.Reader) (err error) {
}
type BasicConsumeOk struct {
ConsumerTag string
ConsumerTag string `json:"consumerTag"`
}
func (msg *BasicConsumeOk) read(r io.Reader) (err error) {
@@ -853,8 +853,8 @@ func (msg *BasicConsumeOk) read(r io.Reader) (err error) {
}
type basicCancel struct {
ConsumerTag string
NoWait bool
ConsumerTag string `json:"consumerTag"`
NoWait bool `json:"noWait"`
}
func (msg *basicCancel) read(r io.Reader) (err error) {
@@ -873,7 +873,7 @@ func (msg *basicCancel) read(r io.Reader) (err error) {
}
type basicCancelOk struct {
ConsumerTag string
ConsumerTag string `json:"consumerTag"`
}
func (msg *basicCancelOk) read(r io.Reader) (err error) {

View File

@@ -13,4 +13,4 @@ test-pull-bin:
test-pull-expect:
@mkdir -p expect
@[ "${skipexpect}" ] && echo "Skipping downloading expected JSONs" || gsutil -o 'GSUtil:parallel_process_count=5' -o 'GSUtil:parallel_thread_count=5' -m cp -r gs://static.up9.io/mizu/test-pcap/expect12/http/\* expect
@[ "${skipexpect}" ] && echo "Skipping downloading expected JSONs" || gsutil -o 'GSUtil:parallel_process_count=5' -o 'GSUtil:parallel_thread_count=5' -m cp -r gs://static.up9.io/mizu/test-pcap/expect16/http/\* expect

View File

@@ -18,10 +18,6 @@ func filterAndEmit(item *api.OutputChannelItem, emitter api.Emitter, options *ap
return
}
if options.EnableRedaction {
FilterSensitiveData(item, options)
}
replaceForwardedFor(item)
emitter.Emit(item)

View File

@@ -6,13 +6,16 @@ import (
"reflect"
"sort"
"strconv"
"strings"
"github.com/up9inc/mizu/tap/api"
)
func mapSliceRebuildAsMap(mapSlice []interface{}) (newMap map[string]interface{}) {
newMap = make(map[string]interface{})
for _, item := range mapSlice {
mergedMapSlice := mapSliceMergeRepeatedKeys(mapSlice)
for _, item := range mergedMapSlice {
h := item.(map[string]interface{})
newMap[h["name"].(string)] = h["value"]
}
@@ -20,6 +23,28 @@ func mapSliceRebuildAsMap(mapSlice []interface{}) (newMap map[string]interface{}
return
}
func mapSliceRebuildAsMergedMap(mapSlice []interface{}) (newMap map[string]interface{}) {
newMap = make(map[string]interface{})
mergedMapSlice := mapSliceMergeRepeatedKeys(mapSlice)
for _, item := range mergedMapSlice {
h := item.(map[string]interface{})
if valuesInterface, ok := h["value"].([]interface{}); ok {
var values []string
for _, valueInterface := range valuesInterface {
values = append(values, valueInterface.(string))
}
newMap[h["name"].(string)] = strings.Join(values, ",")
} else {
newMap[h["name"].(string)] = h["value"]
}
}
return
}
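// Illustrative sketch, not part of this change: repeated keys are expected to
// collapse into a single comma-joined value (assuming mapSliceMergeRepeatedKeys
// groups repeated names into a value slice, as its name suggests).
func exampleMergedHeaders() map[string]interface{} {
	headers := []interface{}{
		map[string]interface{}{"name": "Accept", "value": "text/html"},
		map[string]interface{}{"name": "Accept", "value": "application/json"},
	}
	return mapSliceRebuildAsMergedMap(headers)
	// expected: map[string]interface{}{"Accept": "text/html,application/json"}
}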
func mapSliceMergeRepeatedKeys(mapSlice []interface{}) (newMapSlice []interface{}) {
newMapSlice = make([]interface{}, 0)
valuesMap := make(map[string][]interface{})
@@ -47,6 +72,24 @@ func mapSliceMergeRepeatedKeys(mapSlice []interface{}) (newMapSlice []interface{
return
}
func representMapAsTable(mapToTable map[string]interface{}, selectorPrefix string) (representation string) {
var table []api.TableData
keys := make([]string, 0, len(mapToTable))
for k := range mapToTable {
keys = append(keys, k)
}
sort.Strings(keys)
for _, key := range keys {
table = append(table, createTableForKey(key, mapToTable[key], selectorPrefix)...)
}
obj, _ := json.Marshal(table)
representation = string(obj)
return
}
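// Illustrative sketch, not part of this change: keys are emitted in sorted
// order and each key is expanded into table rows by createTableForKey below.
func exampleHeaderTable() string {
	return representMapAsTable(map[string]interface{}{
		"Host":   "example.com",
		"Accept": "text/html",
	}, `request.headers`)
	// JSON rows ordered Accept, Host with selectors request.headers["Accept"], request.headers["Host"]
}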
func representMapSliceAsTable(mapSlice []interface{}, selectorPrefix string) (representation string) {
var table []api.TableData
for _, item := range mapSlice {
@@ -54,34 +97,7 @@ func representMapSliceAsTable(mapSlice []interface{}, selectorPrefix string) (re
key := h["name"].(string)
value := h["value"]
var reflectKind reflect.Kind
reflectType := reflect.TypeOf(value)
if reflectType == nil {
reflectKind = reflect.Interface
} else {
reflectKind = reflect.TypeOf(value).Kind()
}
switch reflectKind {
case reflect.Slice:
fallthrough
case reflect.Array:
for i, el := range value.([]interface{}) {
selector := fmt.Sprintf("%s.%s[%d]", selectorPrefix, key, i)
table = append(table, api.TableData{
Name: fmt.Sprintf("%s [%d]", key, i),
Value: el,
Selector: selector,
})
}
default:
selector := fmt.Sprintf("%s[\"%s\"]", selectorPrefix, key)
table = append(table, api.TableData{
Name: key,
Value: value,
Selector: selector,
})
}
table = append(table, createTableForKey(key, value, selectorPrefix)...)
}
obj, _ := json.Marshal(table)
@@ -89,6 +105,41 @@ func representMapSliceAsTable(mapSlice []interface{}, selectorPrefix string) (re
return
}
func createTableForKey(key string, value interface{}, selectorPrefix string) []api.TableData {
var table []api.TableData
var reflectKind reflect.Kind
reflectType := reflect.TypeOf(value)
if reflectType == nil {
reflectKind = reflect.Interface
} else {
reflectKind = reflect.TypeOf(value).Kind()
}
switch reflectKind {
case reflect.Slice:
fallthrough
case reflect.Array:
for i, el := range value.([]interface{}) {
selector := fmt.Sprintf("%s.%s[%d]", selectorPrefix, key, i)
table = append(table, api.TableData{
Name: fmt.Sprintf("%s [%d]", key, i),
Value: el,
Selector: selector,
})
}
default:
selector := fmt.Sprintf("%s[\"%s\"]", selectorPrefix, key)
table = append(table, api.TableData{
Name: key,
Value: value,
Selector: selector,
})
}
return table
}
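// Illustrative sketch, not part of this change: a slice value expands into one
// row per element with an indexed selector, while a scalar yields a single row.
func exampleTableRows() []api.TableData {
	rows := createTableForKey("tags", []interface{}{"a", "b"}, "request.headers")
	// rows[0]: Name "tags [0]", Selector `request.headers.tags[0]`; rows[1] likewise for index 1
	rows = append(rows, createTableForKey("host", "example.com", "request.headers")...)
	// last row: Name "host", Selector `request.headers["host"]`
	return rows
}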
func representSliceAsTable(slice []interface{}, selectorPrefix string) (representation string) {
var table []api.TableData
for i, item := range slice {

View File

@@ -15,11 +15,13 @@ import (
)
var http10protocol = api.Protocol{
Name: "http",
ProtocolSummary: api.ProtocolSummary{
Name: "http",
Version: "1.0",
Abbreviation: "HTTP",
},
LongName: "Hypertext Transfer Protocol -- HTTP/1.0",
Abbreviation: "HTTP",
Macro: "http",
Version: "1.0",
BackgroundColor: "#205cf5",
ForegroundColor: "#ffffff",
FontSize: 12,
@@ -29,11 +31,13 @@ var http10protocol = api.Protocol{
}
var http11protocol = api.Protocol{
Name: "http",
ProtocolSummary: api.ProtocolSummary{
Name: "http",
Version: "1.1",
Abbreviation: "HTTP",
},
LongName: "Hypertext Transfer Protocol -- HTTP/1.1",
Abbreviation: "HTTP",
Macro: "http",
Version: "1.1",
BackgroundColor: "#205cf5",
ForegroundColor: "#ffffff",
FontSize: 12,
@@ -43,11 +47,13 @@ var http11protocol = api.Protocol{
}
var http2Protocol = api.Protocol{
Name: "http",
ProtocolSummary: api.ProtocolSummary{
Name: "http",
Version: "2.0",
Abbreviation: "HTTP/2",
},
LongName: "Hypertext Transfer Protocol Version 2 (HTTP/2)",
Abbreviation: "HTTP/2",
Macro: "http2",
Version: "2.0",
BackgroundColor: "#244c5a",
ForegroundColor: "#ffffff",
FontSize: 11,
@@ -57,11 +63,13 @@ var http2Protocol = api.Protocol{
}
var grpcProtocol = api.Protocol{
Name: "http",
ProtocolSummary: api.ProtocolSummary{
Name: "http",
Version: "2.0",
Abbreviation: "gRPC",
},
LongName: "Hypertext Transfer Protocol Version 2 (HTTP/2) [ gRPC over HTTP/2 ]",
Abbreviation: "gRPC",
Macro: "grpc",
Version: "2.0",
BackgroundColor: "#244c5a",
ForegroundColor: "#ffffff",
FontSize: 11,
@@ -71,11 +79,13 @@ var grpcProtocol = api.Protocol{
}
var graphQL1Protocol = api.Protocol{
Name: "http",
ProtocolSummary: api.ProtocolSummary{
Name: "http",
Version: "1.1",
Abbreviation: "GQL",
},
LongName: "Hypertext Transfer Protocol -- HTTP/1.1 [ GraphQL over HTTP/1.1 ]",
Abbreviation: "GQL",
Macro: "gql",
Version: "1.1",
BackgroundColor: "#e10098",
ForegroundColor: "#ffffff",
FontSize: 12,
@@ -85,11 +95,13 @@ var graphQL1Protocol = api.Protocol{
}
var graphQL2Protocol = api.Protocol{
Name: "http",
ProtocolSummary: api.ProtocolSummary{
Name: "http",
Version: "2.0",
Abbreviation: "GQL",
},
LongName: "Hypertext Transfer Protocol Version 2 (HTTP/2) [ GraphQL over HTTP/2 ]",
Abbreviation: "GQL",
Macro: "gql",
Version: "2.0",
BackgroundColor: "#e10098",
ForegroundColor: "#ffffff",
FontSize: 12,
@@ -98,6 +110,15 @@ var graphQL2Protocol = api.Protocol{
Priority: 0,
}
var protocolsMap = map[string]*api.Protocol{
http10protocol.ToString(): &http10protocol,
http11protocol.ToString(): &http11protocol,
http2Protocol.ToString(): &http2Protocol,
grpcProtocol.ToString(): &grpcProtocol,
graphQL1Protocol.ToString(): &graphQL1Protocol,
graphQL2Protocol.ToString(): &graphQL2Protocol,
}
const (
TypeHttpRequest = iota
TypeHttpResponse
@@ -109,6 +130,10 @@ func (d dissecting) Register(extension *api.Extension) {
extension.Protocol = &http11protocol
}
func (d dissecting) GetProtocols() map[string]*api.Protocol {
return protocolsMap
}
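// Illustrative sketch, not part of this change: entries now carry the lighter
// api.ProtocolSummary, and the full api.Protocol (colors, macro, long name) is
// recovered via protocolsMap when summarizing, assuming Protocol.ToString and
// ProtocolSummary.ToString produce the same key.
func exampleProtocolRoundTrip(entry *api.Entry) *api.Protocol {
	return protocolsMap[entry.Protocol.ToString()]
}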
func (d dissecting) Ping() {
log.Printf("pong %s", http11protocol.Name)
}
@@ -261,19 +286,13 @@ func (d dissecting) Analyze(item *api.OutputChannelItem, resolvedSource string,
reqDetails["pathSegments"] = strings.Split(path, "/")[1:]
// Rearrange the maps for the querying
reqDetails["_headers"] = reqDetails["headers"]
reqDetails["headers"] = mapSliceRebuildAsMap(reqDetails["_headers"].([]interface{}))
resDetails["_headers"] = resDetails["headers"]
resDetails["headers"] = mapSliceRebuildAsMap(resDetails["_headers"].([]interface{}))
reqDetails["headers"] = mapSliceRebuildAsMergedMap(reqDetails["headers"].([]interface{}))
resDetails["headers"] = mapSliceRebuildAsMergedMap(resDetails["headers"].([]interface{}))
reqDetails["_cookies"] = reqDetails["cookies"]
reqDetails["cookies"] = mapSliceRebuildAsMap(reqDetails["_cookies"].([]interface{}))
resDetails["_cookies"] = resDetails["cookies"]
resDetails["cookies"] = mapSliceRebuildAsMap(resDetails["_cookies"].([]interface{}))
reqDetails["cookies"] = mapSliceRebuildAsMergedMap(reqDetails["cookies"].([]interface{}))
resDetails["cookies"] = mapSliceRebuildAsMergedMap(resDetails["cookies"].([]interface{}))
reqDetails["_queryString"] = reqDetails["queryString"]
reqDetails["_queryStringMerged"] = mapSliceMergeRepeatedKeys(reqDetails["_queryString"].([]interface{}))
reqDetails["queryString"] = mapSliceRebuildAsMap(reqDetails["_queryStringMerged"].([]interface{}))
reqDetails["queryString"] = mapSliceRebuildAsMap(reqDetails["queryString"].([]interface{}))
elapsedTime := item.Pair.Response.CaptureTime.Sub(item.Pair.Request.CaptureTime).Round(time.Millisecond).Milliseconds()
if elapsedTime < 0 {
@@ -281,7 +300,7 @@ func (d dissecting) Analyze(item *api.OutputChannelItem, resolvedSource string,
}
return &api.Entry{
Protocol: item.Protocol,
Protocol: item.Protocol.ProtocolSummary,
Capture: item.Capture,
Source: &api.TCP{
Name: resolvedSource,
@@ -315,7 +334,7 @@ func (d dissecting) Summarize(entry *api.Entry) *api.BaseEntry {
return &api.BaseEntry{
Id: entry.Id,
Protocol: entry.Protocol,
Protocol: *protocolsMap[entry.Protocol.ToString()],
Capture: entry.Capture,
Summary: summary,
SummaryQuery: summaryQuery,
@@ -328,7 +347,6 @@ func (d dissecting) Summarize(entry *api.Entry) *api.BaseEntry {
Destination: entry.Destination,
IsOutgoing: entry.Outgoing,
Latency: entry.ElapsedTime,
Rules: entry.Rules,
}
}
@@ -373,19 +391,19 @@ func representRequest(request map[string]interface{}) (repRequest []interface{})
repRequest = append(repRequest, api.SectionData{
Type: api.TABLE,
Title: "Headers",
Data: representMapSliceAsTable(request["_headers"].([]interface{}), `request.headers`),
Data: representMapAsTable(request["headers"].(map[string]interface{}), `request.headers`),
})
repRequest = append(repRequest, api.SectionData{
Type: api.TABLE,
Title: "Cookies",
Data: representMapSliceAsTable(request["_cookies"].([]interface{}), `request.cookies`),
Data: representMapAsTable(request["cookies"].(map[string]interface{}), `request.cookies`),
})
repRequest = append(repRequest, api.SectionData{
Type: api.TABLE,
Title: "Query String",
Data: representMapSliceAsTable(request["_queryStringMerged"].([]interface{}), `request.queryString`),
Data: representMapAsTable(request["queryString"].(map[string]interface{}), `request.queryString`),
})
postData, _ := request["postData"].(map[string]interface{})
@@ -461,13 +479,13 @@ func representResponse(response map[string]interface{}) (repResponse []interface
repResponse = append(repResponse, api.SectionData{
Type: api.TABLE,
Title: "Headers",
Data: representMapSliceAsTable(response["_headers"].([]interface{}), `response.headers`),
Data: representMapAsTable(response["headers"].(map[string]interface{}), `response.headers`),
})
repResponse = append(repResponse, api.SectionData{
Type: api.TABLE,
Title: "Cookies",
Data: representMapSliceAsTable(response["_cookies"].([]interface{}), `response.cookies`),
Data: representMapAsTable(response["cookies"].(map[string]interface{}), `response.cookies`),
})
content, _ := response["content"].(map[string]interface{})
@@ -503,10 +521,10 @@ func (d dissecting) Represent(request map[string]interface{}, response map[strin
func (d dissecting) Macros() map[string]string {
return map[string]string{
`http`: fmt.Sprintf(`proto.name == "%s" and proto.version.startsWith("%c")`, http11protocol.Name, http11protocol.Version[0]),
`http2`: fmt.Sprintf(`proto.name == "%s" and proto.version == "%s"`, http11protocol.Name, http2Protocol.Version),
`grpc`: fmt.Sprintf(`proto.name == "%s" and proto.version == "%s" and proto.macro == "%s"`, http11protocol.Name, grpcProtocol.Version, grpcProtocol.Macro),
`gql`: fmt.Sprintf(`proto.name == "%s" and proto.macro == "%s"`, graphQL1Protocol.Name, graphQL1Protocol.Macro),
`http`: fmt.Sprintf(`protocol.abbr == "%s"`, http11protocol.Abbreviation),
`http2`: fmt.Sprintf(`protocol.abbr == "%s"`, http2Protocol.Abbreviation),
`grpc`: fmt.Sprintf(`protocol.abbr == "%s"`, grpcProtocol.Abbreviation),
`gql`: fmt.Sprintf(`protocol.abbr == "%s"`, graphQL1Protocol.Abbreviation),
}
}

View File

@@ -44,10 +44,10 @@ func TestRegister(t *testing.T) {
func TestMacros(t *testing.T) {
expectedMacros := map[string]string{
"http": `proto.name == "http" and proto.version.startsWith("1")`,
"http2": `proto.name == "http" and proto.version == "2.0"`,
"grpc": `proto.name == "http" and proto.version == "2.0" and proto.macro == "grpc"`,
"gql": `proto.name == "http" and proto.macro == "gql"`,
"http": `protocol.abbr == "HTTP"`,
"http2": `protocol.abbr == "HTTP/2"`,
"grpc": `protocol.abbr == "gRPC"`,
"gql": `protocol.abbr == "GQL"`,
}
dissector := NewDissector()
macros := dissector.Macros()

View File

@@ -1,30 +1,14 @@
package http
import (
"bytes"
"encoding/json"
"encoding/xml"
"errors"
"fmt"
"io/ioutil"
"net/http"
"net/url"
"strings"
"github.com/beevik/etree"
"github.com/up9inc/mizu/tap/api"
)
const maskedFieldPlaceholderValue = "[REDACTED]"
const userAgent = "user-agent"
//these values MUST be all lower case and contain no `-` or `_` characters
var personallyIdentifiableDataFields = []string{"token", "authorization", "authentication", "cookie", "userid", "password",
"username", "user", "key", "passcode", "pass", "auth", "authtoken", "jwt",
"bearer", "clientid", "clientsecret", "redirecturi", "phonenumber",
"zip", "zipcode", "address", "country", "firstname", "lastname",
"middlename", "fname", "lname", "birthdate"}
func IsIgnoredUserAgent(item *api.OutputChannelItem, options *api.TrafficFilteringOptions) bool {
if item.Protocol.Name != "http" {
return false
@@ -48,192 +32,3 @@ func IsIgnoredUserAgent(item *api.OutputChannelItem, options *api.TrafficFilteri
return false
}
func FilterSensitiveData(item *api.OutputChannelItem, options *api.TrafficFilteringOptions) {
request := item.Pair.Request.Payload.(HTTPPayload).Data.(*http.Request)
response := item.Pair.Response.Payload.(HTTPPayload).Data.(*http.Response)
filterHeaders(&request.Header)
filterHeaders(&response.Header)
filterUrl(request.URL)
filterRequestBody(request, options)
filterResponseBody(response, options)
}
func filterRequestBody(request *http.Request, options *api.TrafficFilteringOptions) {
contenType := getContentTypeHeaderValue(request.Header)
body, err := ioutil.ReadAll(request.Body)
if err != nil {
return
}
filteredBody, err := filterHttpBody(body, contenType, options)
if err == nil {
request.Body = ioutil.NopCloser(bytes.NewBuffer(filteredBody))
} else {
request.Body = ioutil.NopCloser(bytes.NewBuffer(body))
}
}
func filterResponseBody(response *http.Response, options *api.TrafficFilteringOptions) {
contentType := getContentTypeHeaderValue(response.Header)
body, err := ioutil.ReadAll(response.Body)
if err != nil {
return
}
filteredBody, err := filterHttpBody(body, contentType, options)
if err == nil {
response.Body = ioutil.NopCloser(bytes.NewBuffer(filteredBody))
} else {
response.Body = ioutil.NopCloser(bytes.NewBuffer(body))
}
}
func filterHeaders(headers *http.Header) {
for key := range *headers {
if strings.ToLower(key) == userAgent {
continue
}
if strings.ToLower(key) == "cookie" {
headers.Del(key)
} else if isFieldNameSensitive(key) {
headers.Set(key, maskedFieldPlaceholderValue)
}
}
}
func getContentTypeHeaderValue(headers http.Header) string {
for key := range headers {
if strings.ToLower(key) == "content-type" {
return headers.Get(key)
}
}
return ""
}
func isFieldNameSensitive(fieldName string) bool {
if fieldName == ":authority" {
return false
}
name := strings.ToLower(fieldName)
name = strings.ReplaceAll(name, "_", "")
name = strings.ReplaceAll(name, "-", "")
name = strings.ReplaceAll(name, " ", "")
for _, sensitiveField := range personallyIdentifiableDataFields {
if strings.Contains(name, sensitiveField) {
return true
}
}
return false
}
func filterHttpBody(bytes []byte, contentType string, options *api.TrafficFilteringOptions) ([]byte, error) {
mimeType := strings.Split(contentType, ";")[0]
switch strings.ToLower(mimeType) {
case "application/json":
return filterJsonBody(bytes)
case "text/html":
fallthrough
case "application/xhtml+xml":
fallthrough
case "text/xml":
fallthrough
case "application/xml":
return filterXmlEtree(bytes)
case "text/plain":
if options != nil && options.PlainTextMaskingRegexes != nil {
return filterPlainText(bytes, options), nil
}
}
return bytes, nil
}
func filterPlainText(bytes []byte, options *api.TrafficFilteringOptions) []byte {
for _, regex := range options.PlainTextMaskingRegexes {
bytes = regex.ReplaceAll(bytes, []byte(maskedFieldPlaceholderValue))
}
return bytes
}
func filterXmlEtree(bytes []byte) ([]byte, error) {
if !IsValidXML(bytes) {
return nil, errors.New("Invalid XML")
}
xmlDoc := etree.NewDocument()
err := xmlDoc.ReadFromBytes(bytes)
if err != nil {
return nil, err
} else {
filterXmlElement(xmlDoc.Root())
}
return xmlDoc.WriteToBytes()
}
func IsValidXML(data []byte) bool {
return xml.Unmarshal(data, new(interface{})) == nil
}
func filterXmlElement(element *etree.Element) {
for i, attribute := range element.Attr {
if isFieldNameSensitive(attribute.Key) {
element.Attr[i].Value = maskedFieldPlaceholderValue
}
}
if element.ChildElements() == nil || len(element.ChildElements()) == 0 {
if isFieldNameSensitive(element.Tag) {
element.SetText(maskedFieldPlaceholderValue)
}
} else {
for _, element := range element.ChildElements() {
filterXmlElement(element)
}
}
}
func filterJsonBody(bytes []byte) ([]byte, error) {
var bodyJsonMap map[string]interface{}
err := json.Unmarshal(bytes, &bodyJsonMap)
if err != nil {
return nil, err
}
filterJsonMap(bodyJsonMap)
return json.Marshal(bodyJsonMap)
}
func filterJsonMap(jsonMap map[string]interface{}) {
for key, value := range jsonMap {
// Do not replace nil values with maskedFieldPlaceholderValue
if value == nil {
continue
}
nestedMap, isNested := value.(map[string]interface{})
if isNested {
filterJsonMap(nestedMap)
} else {
if isFieldNameSensitive(key) {
jsonMap[key] = maskedFieldPlaceholderValue
}
}
}
}
func filterUrl(url *url.URL) {
if len(url.RawQuery) > 0 {
newQueryArgs := make([]string, 0)
for urlQueryParamName, urlQueryParamValues := range url.Query() {
newValues := urlQueryParamValues
if isFieldNameSensitive(urlQueryParamName) {
newValues = []string{maskedFieldPlaceholderValue}
}
for _, paramValue := range newValues {
newQueryArgs = append(newQueryArgs, fmt.Sprintf("%s=%s", urlQueryParamName, paramValue))
}
}
url.RawQuery = strings.Join(newQueryArgs, "&")
}
}

View File

@@ -13,4 +13,4 @@ test-pull-bin:
test-pull-expect:
@mkdir -p expect
@[ "${skipexpect}" ] && echo "Skipping downloading expected JSONs" || gsutil -o 'GSUtil:parallel_process_count=5' -o 'GSUtil:parallel_thread_count=5' -m cp -r gs://static.up9.io/mizu/test-pcap/expect11/kafka/\* expect
@[ "${skipexpect}" ] && echo "Skipping downloading expected JSONs" || gsutil -o 'GSUtil:parallel_process_count=5' -o 'GSUtil:parallel_thread_count=5' -m cp -r gs://static.up9.io/mizu/test-pcap/expect15/kafka/\* expect

View File

@@ -11,11 +11,13 @@ import (
)
var _protocol = api.Protocol{
Name: "kafka",
ProtocolSummary: api.ProtocolSummary{
Name: "kafka",
Version: "12",
Abbreviation: "KAFKA",
},
LongName: "Apache Kafka Protocol",
Abbreviation: "KAFKA",
Macro: "kafka",
Version: "12",
BackgroundColor: "#000000",
ForegroundColor: "#ffffff",
FontSize: 11,
@@ -24,12 +26,20 @@ var _protocol = api.Protocol{
Priority: 2,
}
var protocolsMap = map[string]*api.Protocol{
_protocol.ToString(): &_protocol,
}
type dissecting string
func (d dissecting) Register(extension *api.Extension) {
extension.Protocol = &_protocol
}
func (d dissecting) GetProtocols() map[string]*api.Protocol {
return protocolsMap
}
func (d dissecting) Ping() {
log.Printf("pong %s", _protocol.Name)
}
@@ -62,7 +72,7 @@ func (d dissecting) Analyze(item *api.OutputChannelItem, resolvedSource string,
elapsedTime = 0
}
return &api.Entry{
Protocol: _protocol,
Protocol: _protocol.ProtocolSummary,
Capture: item.Capture,
Source: &api.TCP{
Name: resolvedSource,
@@ -187,7 +197,7 @@ func (d dissecting) Summarize(entry *api.Entry) *api.BaseEntry {
return &api.BaseEntry{
Id: entry.Id,
Protocol: entry.Protocol,
Protocol: *protocolsMap[entry.Protocol.ToString()],
Capture: entry.Capture,
Summary: summary,
SummaryQuery: summaryQuery,
@@ -200,7 +210,6 @@ func (d dissecting) Summarize(entry *api.Entry) *api.BaseEntry {
Destination: entry.Destination,
IsOutgoing: entry.Outgoing,
Latency: entry.ElapsedTime,
Rules: entry.Rules,
}
}
@@ -243,7 +252,7 @@ func (d dissecting) Represent(request map[string]interface{}, response map[strin
func (d dissecting) Macros() map[string]string {
return map[string]string{
`kafka`: fmt.Sprintf(`proto.name == "%s"`, _protocol.Name),
`kafka`: fmt.Sprintf(`protocol.name == "%s"`, _protocol.Name),
}
}

View File

@@ -44,7 +44,7 @@ func TestRegister(t *testing.T) {
func TestMacros(t *testing.T) {
expectedMacros := map[string]string{
"kafka": `proto.name == "kafka"`,
"kafka": `protocol.name == "kafka"`,
}
dissector := NewDissector()
macros := dissector.Macros()

View File

@@ -13,4 +13,4 @@ test-pull-bin:
test-pull-expect:
@mkdir -p expect
@[ "${skipexpect}" ] && echo "Skipping downloading expected JSONs" || gsutil -o 'GSUtil:parallel_process_count=5' -o 'GSUtil:parallel_thread_count=5' -m cp -r gs://static.up9.io/mizu/test-pcap/expect11/redis/\* expect
@[ "${skipexpect}" ] && echo "Skipping downloading expected JSONs" || gsutil -o 'GSUtil:parallel_process_count=5' -o 'GSUtil:parallel_thread_count=5' -m cp -r gs://static.up9.io/mizu/test-pcap/expect15/redis/\* expect

View File

@@ -11,11 +11,13 @@ import (
)
var protocol = api.Protocol{
Name: "redis",
ProtocolSummary: api.ProtocolSummary{
Name: "redis",
Version: "3.x",
Abbreviation: "REDIS",
},
LongName: "Redis Serialization Protocol",
Abbreviation: "REDIS",
Macro: "redis",
Version: "3.x",
BackgroundColor: "#a41e11",
ForegroundColor: "#ffffff",
FontSize: 11,
@@ -24,12 +26,20 @@ var protocol = api.Protocol{
Priority: 3,
}
var protocolsMap = map[string]*api.Protocol{
protocol.ToString(): &protocol,
}
type dissecting string
func (d dissecting) Register(extension *api.Extension) {
extension.Protocol = &protocol
}
func (d dissecting) GetProtocols() map[string]*api.Protocol {
return protocolsMap
}
func (d dissecting) Ping() {
log.Printf("pong %s", protocol.Name)
}
@@ -70,7 +80,7 @@ func (d dissecting) Analyze(item *api.OutputChannelItem, resolvedSource string,
elapsedTime = 0
}
return &api.Entry{
Protocol: protocol,
Protocol: protocol.ProtocolSummary,
Capture: item.Capture,
Source: &api.TCP{
Name: resolvedSource,
@@ -115,7 +125,7 @@ func (d dissecting) Summarize(entry *api.Entry) *api.BaseEntry {
return &api.BaseEntry{
Id: entry.Id,
Protocol: entry.Protocol,
Protocol: *protocolsMap[entry.Protocol.ToString()],
Capture: entry.Capture,
Summary: summary,
SummaryQuery: summaryQuery,
@@ -128,7 +138,6 @@ func (d dissecting) Summarize(entry *api.Entry) *api.BaseEntry {
Destination: entry.Destination,
IsOutgoing: entry.Outgoing,
Latency: entry.ElapsedTime,
Rules: entry.Rules,
}
}
@@ -144,7 +153,7 @@ func (d dissecting) Represent(request map[string]interface{}, response map[strin
func (d dissecting) Macros() map[string]string {
return map[string]string{
`redis`: fmt.Sprintf(`proto.name == "%s"`, protocol.Name),
`redis`: fmt.Sprintf(`protocol.name == "%s"`, protocol.Name),
}
}

View File

@@ -45,7 +45,7 @@ func TestRegister(t *testing.T) {
func TestMacros(t *testing.T) {
expectedMacros := map[string]string{
"redis": `proto.name == "redis"`,
"redis": `protocol.name == "redis"`,
}
dissector := NewDissector()
macros := dissector.Macros()

View File

@@ -4,11 +4,12 @@ go 1.17
require (
github.com/Masterminds/semver v1.5.0
github.com/cilium/ebpf v0.8.1
github.com/cilium/ebpf v0.9.0
github.com/go-errors/errors v1.4.2
github.com/google/gopacket v1.1.19
github.com/hashicorp/golang-lru v0.5.4
github.com/knightsc/gapstone v0.0.0-20191231144527-6fa5afaf11a9
github.com/moby/moby v20.10.17+incompatible
github.com/shirou/gopsutil v3.21.11+incompatible
github.com/struCoder/pidusage v0.2.1
github.com/up9inc/mizu/logger v0.0.0
@@ -28,6 +29,7 @@ require (
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/op/go-logging v0.0.0-20160315200505-970db520ece7 // indirect
github.com/sirupsen/logrus v1.8.1 // indirect
github.com/tklauser/go-sysconf v0.3.10 // indirect
github.com/tklauser/numcpus v0.4.0 // indirect
github.com/yusufpapurcu/wmi v1.2.2 // indirect
@@ -36,6 +38,7 @@ require (
golang.org/x/text v0.3.7 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
gotest.tools/v3 v3.3.0 // indirect
k8s.io/apimachinery v0.23.3 // indirect
k8s.io/klog/v2 v2.40.1 // indirect
k8s.io/utils v0.0.0-20220127004650-9b3446523e65 // indirect

View File

@@ -7,8 +7,8 @@ github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbt
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cilium/ebpf v0.8.1 h1:bLSSEbBLqGPXxls55pGr5qWZaTqcmfDJHhou7t254ao=
github.com/cilium/ebpf v0.8.1/go.mod h1:f5zLIM0FSNuAkSyLAN7X+Hy6yznlF1mNiWUMfxMtrgk=
github.com/cilium/ebpf v0.9.0 h1:ldiV+FscPCQ/p3mNEV4O02EPbUZJFsoEtHvIr9xLTvk=
github.com/cilium/ebpf v0.9.0/go.mod h1:+OhNOIXx/Fnu1IE8bJz2dzOA+VSfyTfdNUVdlQnxUFY=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
@@ -94,6 +94,8 @@ github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/mailru/easyjson v0.0.0-20190614124828-94de47d64c63/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/moby/moby v20.10.17+incompatible h1:TJJfyk2fLEgK+RzqVpFNkDkm0oEi+MLUfwt9lEYnp5g=
github.com/moby/moby v20.10.17+incompatible/go.mod h1:fDXVQ6+S340veQPv35CzDahGBmHsiclFwfEygB/TWMc=
github.com/moby/spdystream v0.2.0/go.mod h1:f7i0iNDQJ059oMTcWxx8MA/zKFIuD/lY+0GqbN2Wy8c=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
@@ -123,11 +125,15 @@ github.com/rogpeppe/go-internal v1.6.1 h1:/FiVV8dS/e+YqF2JvO3yXRFbBLTIuSDkuC7aBO
github.com/rogpeppe/go-internal v1.6.1/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTEfhy4qGm1nDQc=
github.com/shirou/gopsutil v3.21.11+incompatible h1:+1+c1VGhc88SSonWP6foOcLhvnKlUeu/erjjvaPEYiI=
github.com/shirou/gopsutil v3.21.11+incompatible/go.mod h1:5b4v6he4MtMOwMlS0TUMTu2PcXUg8+E1lC7eC3UO/RA=
github.com/sirupsen/logrus v1.8.1 h1:dJKuHgqk1NNQlqoA6BTlM1Wf9DOH3NBjQyu0h9+AZZE=
github.com/sirupsen/logrus v1.8.1/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/stoewer/go-strcase v1.2.0/go.mod h1:IBiWB2sKIp3wVVQ3Y035++gc+knqhUQag1KpM8ahLw8=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5CcY=
@@ -186,12 +192,14 @@ golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200217220822-9197077df867/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200519105757-fe76b779f299/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210119212857-b64e53b001e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
@@ -221,6 +229,7 @@ golang.org/x/tools v0.0.0-20200130002326-2f3ba24bd6e7/go.mod h1:TB2adYChydJhpapK
golang.org/x/tools v0.0.0-20200505023115-26f46d2f7ef8/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0=
golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@@ -269,6 +278,8 @@ gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C
gopkg.in/yaml.v3 v3.0.0-20200615113413-eeeca48fe776/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b h1:h8qDotaEPuJATrMmW04NCwg7v22aHH28wwpauUhK9Oo=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gotest.tools/v3 v3.3.0 h1:MfDY1b1/0xN1CyMlQDac0ziEy9zJQd9CXBRRDHw2jJo=
gotest.tools/v3 v3.3.0/go.mod h1:Mcr9QNxkg0uMvy/YElmo4SpXgJKWgQvYrT7Kw5RzJ1A=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
k8s.io/api v0.23.3 h1:KNrME8KHGr12Ozjf8ytOewKzZh6hl/hHUZeHddT3a38=

View File

@@ -85,7 +85,7 @@ func NewTcpAssembler(outputItems chan *api.OutputChannelItem, streamsMap api.Tcp
maxBufferedPagesTotal := GetMaxBufferedPagesPerConnection()
maxBufferedPagesPerConnection := GetMaxBufferedPagesTotal()
logger.Log.Infof("Assembler options: maxBufferedPagesTotal=%d, maxBufferedPagesPerConnection=%d, opts=%v",
logger.Log.Infof("Assembler options: maxBufferedPagesTotal=%d, maxBufferedPagesPerConnection=%d, opts=%+v",
maxBufferedPagesTotal, maxBufferedPagesPerConnection, opts)
a.Assembler.AssemblerOptions.MaxBufferedPagesTotal = maxBufferedPagesTotal
a.Assembler.AssemblerOptions.MaxBufferedPagesPerConnection = maxBufferedPagesPerConnection

View File

@@ -9,7 +9,7 @@ docker build -t mizu-ebpf-builder . || exit 1
BPF_TARGET=amd64
BPF_CFLAGS="-O2 -g -D__TARGET_ARCH_x86"
ARCH=$(uname -m)
if [[ $ARCH == "aarch64" ]]; then
if [[ $ARCH == "aarch64" || $ARCH == "arm64" ]]; then
BPF_TARGET=arm64
BPF_CFLAGS="-O2 -g -D__TARGET_ARCH_arm64"
fi
@@ -18,10 +18,10 @@ docker run --rm \
--name mizu-ebpf-builder \
-v $MIZU_HOME:/mizu \
-v $(go env GOPATH):/root/go \
-it mizu-ebpf-builder \
mizu-ebpf-builder \
sh -c "
BPF_TARGET=\"$BPF_TARGET\" BPF_CFLAGS=\"$BPF_CFLAGS\" go generate tap/tlstapper/tls_tapper.go
chown $(id -u):$(id -g) tap/tlstapper/tlstapper_bpf*
chown $(id -u):$(id -g) tap/tlstapper/tlstapper*_bpf*
" || exit 1
popd

View File

@@ -12,7 +12,7 @@ Copyright (C) UP9 Inc.
#include "include/common.h"
static __always_inline int add_address_to_chunk(struct pt_regs *ctx, struct tls_chunk* chunk, __u64 id, __u32 fd) {
static __always_inline int add_address_to_chunk(struct pt_regs *ctx, struct tls_chunk* chunk, __u64 id, __u32 fd, struct ssl_info* info) {
__u32 pid = id >> 32;
__u64 key = (__u64) pid << 32 | fd;
@@ -22,14 +22,29 @@ static __always_inline int add_address_to_chunk(struct pt_regs *ctx, struct tls_
return 0;
}
int err = bpf_probe_read(chunk->address, sizeof(chunk->address), fdinfo->ipv4_addr);
chunk->flags |= (fdinfo->flags & FLAGS_IS_CLIENT_BIT);
int err;
if (err != 0) {
log_error(ctx, LOG_ERROR_READING_FD_ADDRESS, id, err, 0l);
return 0;
switch (info->address_info.mode) {
case ADDRESS_INFO_MODE_UNDEFINED:
chunk->address_info.mode = ADDRESS_INFO_MODE_SINGLE;
err = bpf_probe_read(&chunk->address_info.sport, sizeof(chunk->address_info.sport), &fdinfo->ipv4_addr[2]);
if (err != 0) {
log_error(ctx, LOG_ERROR_READING_FD_ADDRESS, id, err, 0l);
return 0;
}
err = bpf_probe_read(&chunk->address_info.saddr, sizeof(chunk->address_info.saddr), &fdinfo->ipv4_addr[4]);
if (err != 0) {
log_error(ctx, LOG_ERROR_READING_FD_ADDRESS, id, err, 0l);
return 0;
}
break;
default:
bpf_probe_read(&chunk->address_info, sizeof(chunk->address_info), &info->address_info);
}
chunk->flags |= (fdinfo->flags & FLAGS_IS_CLIENT_BIT);
return 1;
}
@@ -104,7 +119,7 @@ static __always_inline void output_ssl_chunk(struct pt_regs *ctx, struct ssl_inf
chunk->len = count_bytes;
chunk->fd = info->fd;
if (!add_address_to_chunk(ctx, chunk, id, chunk->fd)) {
if (!add_address_to_chunk(ctx, chunk, id, chunk->fd, info)) {
// Without an address, we drop the chunk because there is not much to do with it in Go
//
return;

View File

@@ -35,11 +35,12 @@ using `bpf_probe_read` calls in `go_crypto_tls_get_fd_from_tcp_conn` function.
SOURCES:
Tracing Go Functions with eBPF (before 1.17): https://www.grant.pizza/blog/tracing-go-functions-with-ebpf-part-2/
Tracing Go Functions with eBPF (<=1.16): https://www.grant.pizza/blog/tracing-go-functions-with-ebpf-part-2/
Challenges of BPF Tracing Go: https://blog.0x74696d.com/posts/challenges-of-bpf-tracing-go/
x86 calling conventions: https://en.wikipedia.org/wiki/X86_calling_conventions
Plan 9 from Bell Labs: https://en.wikipedia.org/wiki/Plan_9_from_Bell_Labs
The issue for calling convention change in Go: https://github.com/golang/go/issues/40724
Go ABI0 (<=1.16) specification: https://go.dev/doc/asm
Proposal of Register-based Go calling convention: https://go.googlesource.com/proposal/+/master/design/40724-register-calling.md
Go internal ABI (1.17) specification: https://go.googlesource.com/go/+/refs/heads/dev.regabi/src/cmd/compile/internal-abi.md
Go internal ABI (current) specification: https://go.googlesource.com/go/+/refs/heads/master/src/cmd/compile/abi-internal.md
@@ -55,10 +56,60 @@ Capstone Engine: https://www.capstone-engine.org/
#include "include/logger_messages.h"
#include "include/pids.h"
#include "include/common.h"
#include "include/go_abi_0.h"
#include "include/go_abi_internal.h"
#include "include/go_types.h"
static __always_inline __u32 go_crypto_tls_get_fd_from_tcp_conn(struct pt_regs *ctx) {
// TODO: cilium/ebpf does not support .kconfig. Therefore, for now, we build object files per kernel version.
// Error: reference to .kconfig: not supported
// See: https://github.com/cilium/ebpf/issues/698
// extern int LINUX_KERNEL_VERSION __kconfig;
enum ABI {
ABI0=0,
ABIInternal=1,
};
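// ABI0 is Go's stack-based calling convention (used by Go <=1.16), while
// ABIInternal is the register-based convention introduced on amd64 in Go 1.17
// (see the SOURCES above); the uprobes below pass the ABI down so that
// arguments are read from the stack or from registers accordingly.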
#if defined(bpf_target_x86)
// get_goid_from_thread_local_storage function is x86 specific
static __always_inline __u32 get_goid_from_thread_local_storage(__u64 *goroutine_id) {
int zero = 0;
int one = 1;
struct goid_offsets* offsets = bpf_map_lookup_elem(&goid_offsets_map, &zero);
if (offsets == NULL) {
return 0;
}
// Get the task that is currently assigned to this thread.
struct task_struct *task = (struct task_struct*) bpf_get_current_task();
if (task == NULL) {
return 0;
}
// Read task->thread
struct thread_struct *thr;
bpf_probe_read(&thr, sizeof(thr), &task->thread);
// Read task->thread.fsbase
u64 fsbase;
#ifdef KERNEL_BEFORE_4_6
// TODO: if (LINUX_KERNEL_VERSION <= KERNEL_VERSION(4, 6, 0)) {
fsbase = BPF_CORE_READ((struct thread_struct___v46 *)thr, fs);
#else
fsbase = BPF_CORE_READ(thr, fsbase);
#endif
// Get the Goroutine ID (goid) which is stored in thread-local storage.
size_t g_addr;
bpf_probe_read_user(&g_addr, sizeof(void *), (void*)(fsbase + offsets->g_addr_offset));
bpf_probe_read_user(goroutine_id, sizeof(void *), (void*)(g_addr + offsets->goid_offset));
return 1;
}
#endif
static __always_inline __u32 go_crypto_tls_get_fd_from_tcp_conn(struct pt_regs *ctx, enum ABI abi) {
struct go_interface conn;
long err;
__u64 addr;
@@ -67,8 +118,15 @@ static __always_inline __u32 go_crypto_tls_get_fd_from_tcp_conn(struct pt_regs *
if (err != 0) {
return invalid_fd;
}
#else
addr = GO_ABI_INTERNAL_PT_REGS_R1(ctx);
#elif defined(bpf_target_x86)
if (abi == ABI0) {
err = bpf_probe_read(&addr, sizeof(addr), (void*)GO_ABI_INTERNAL_PT_REGS_SP(ctx)+0x8);
if (err != 0) {
return invalid_fd;
}
} else {
addr = GO_ABI_INTERNAL_PT_REGS_R1(ctx);
}
#endif
err = bpf_probe_read(&conn, sizeof(conn), (void*)addr);
@@ -91,7 +149,7 @@ static __always_inline __u32 go_crypto_tls_get_fd_from_tcp_conn(struct pt_regs *
return fd;
}
static __always_inline void go_crypto_tls_uprobe(struct pt_regs *ctx, struct bpf_map_def* go_context) {
static __always_inline void go_crypto_tls_uprobe(struct pt_regs *ctx, struct bpf_map_def* go_context, enum ABI abi) {
__u64 pid_tgid = bpf_get_current_pid_tgid();
__u64 pid = pid_tgid >> 32;
if (!should_tap(pid)) {
@@ -107,14 +165,52 @@ static __always_inline void go_crypto_tls_uprobe(struct pt_regs *ctx, struct bpf
log_error(ctx, LOG_ERROR_READING_BYTES_COUNT, pid_tgid, err, ORIGIN_SSL_UPROBE_CODE);
return;
}
#else
info.buffer_len = GO_ABI_INTERNAL_PT_REGS_R2(ctx);
#elif defined(bpf_target_x86)
if (abi == ABI0) {
err = bpf_probe_read(&info.buffer_len, sizeof(__u32), (void*)GO_ABI_0_PT_REGS_SP(ctx)+0x18);
if (err != 0) {
log_error(ctx, LOG_ERROR_READING_BYTES_COUNT, pid_tgid, err, ORIGIN_SSL_UPROBE_CODE);
return;
}
} else {
info.buffer_len = GO_ABI_INTERNAL_PT_REGS_R2(ctx);
}
#endif
info.buffer = (void*)GO_ABI_INTERNAL_PT_REGS_R4(ctx);
info.fd = go_crypto_tls_get_fd_from_tcp_conn(ctx);
// GO_ABI_INTERNAL_PT_REGS_GP is Goroutine address
__u64 pid_fp = pid << 32 | GO_ABI_INTERNAL_PT_REGS_GP(ctx);
#if defined(bpf_target_x86)
if (abi == ABI0) {
err = bpf_probe_read(&info.buffer, sizeof(__u32), (void*)GO_ABI_0_PT_REGS_SP(ctx)+0x11);
if (err != 0) {
log_error(ctx, LOG_ERROR_READING_FROM_SSL_BUFFER, pid_tgid, err, ORIGIN_SSL_UPROBE_CODE);
return;
}
// Reconstruct the pointer by shifting left 8 bits (i.e. appending a 00 suffix to the hex address).
info.buffer = (void*)((long)info.buffer << 8);
} else {
#endif
info.buffer = (void*)GO_ABI_INTERNAL_PT_REGS_R4(ctx);
#if defined(bpf_target_x86)
}
#endif
info.fd = go_crypto_tls_get_fd_from_tcp_conn(ctx, abi);
__u64 goroutine_id;
if (abi == ABI0) {
#if defined(bpf_target_arm64)
// In case of ABI0 and arm64, it's stored in the Goroutine register
goroutine_id = GO_ABI_0_PT_REGS_GP(ctx);
#elif defined(bpf_target_x86)
// In case of ABI0 and amd64, it's stored in the thread-local storage
int status = get_goid_from_thread_local_storage(&goroutine_id);
if (!status) {
return;
}
#endif
} else {
// GO_ABI_INTERNAL_PT_REGS_GP is the Goroutine address in ABIInternal
goroutine_id = GO_ABI_INTERNAL_PT_REGS_GP(ctx);
}
__u64 pid_fp = pid << 32 | goroutine_id;
err = bpf_map_update_elem(go_context, &pid_fp, &info, BPF_ANY);
if (err != 0) {
@@ -124,15 +220,30 @@ static __always_inline void go_crypto_tls_uprobe(struct pt_regs *ctx, struct bpf
return;
}
static __always_inline void go_crypto_tls_ex_uprobe(struct pt_regs *ctx, struct bpf_map_def* go_context, __u32 flags) {
static __always_inline void go_crypto_tls_ex_uprobe(struct pt_regs *ctx, struct bpf_map_def* go_context, __u32 flags, enum ABI abi) {
__u64 pid_tgid = bpf_get_current_pid_tgid();
__u64 pid = pid_tgid >> 32;
if (!should_tap(pid)) {
return;
}
// GO_ABI_INTERNAL_PT_REGS_GP is Goroutine address
__u64 pid_fp = pid << 32 | GO_ABI_INTERNAL_PT_REGS_GP(ctx);
__u64 goroutine_id;
if (abi == ABI0) {
#if defined(bpf_target_arm64)
// In case of ABI0 and arm64, it's stored in the Goroutine register
goroutine_id = GO_ABI_0_PT_REGS_GP(ctx);
#elif defined(bpf_target_x86)
// In case of ABI0 and amd64, it's stored in the thread-local storage
int status = get_goid_from_thread_local_storage(&goroutine_id);
if (!status) {
return;
}
#endif
} else {
// GO_ABI_INTERNAL_PT_REGS_GP is the Goroutine address in ABIInternal
goroutine_id = GO_ABI_INTERNAL_PT_REGS_GP(ctx);
}
__u64 pid_fp = pid << 32 | goroutine_id;
struct ssl_info *info_ptr = bpf_map_lookup_elem(go_context, &pid_fp);
if (info_ptr == NULL) {
@@ -156,8 +267,17 @@ static __always_inline void go_crypto_tls_ex_uprobe(struct pt_regs *ctx, struct
return;
}
info.buffer_len = GO_ABI_INTERNAL_PT_REGS_R7(ctx); // n in return n, nil
#else
info.buffer_len = GO_ABI_INTERNAL_PT_REGS_R1(ctx); // n in return n, nil
#elif defined(bpf_target_x86)
if (abi == ABI0) {
// n in return n, nil
err = bpf_probe_read(&info.buffer_len, sizeof(__u32), (void*)GO_ABI_0_PT_REGS_SP(ctx)+0x28);
if (err != 0) {
log_error(ctx, LOG_ERROR_READING_BYTES_COUNT, pid_tgid, err, ORIGIN_SSL_UPROBE_CODE);
return;
}
} else {
info.buffer_len = GO_ABI_INTERNAL_PT_REGS_R1(ctx); // n in return n, nil
}
#endif
// This check ignores zero-length reads (such reads result in an error)
if (info.buffer_len <= 0) {
@@ -170,22 +290,50 @@ static __always_inline void go_crypto_tls_ex_uprobe(struct pt_regs *ctx, struct
return;
}
SEC("uprobe/go_crypto_tls_write")
void BPF_KPROBE(go_crypto_tls_write) {
go_crypto_tls_uprobe(ctx, &go_write_context);
SEC("uprobe/go_crypto_tls_abi0_write")
int BPF_KPROBE(go_crypto_tls_abi0_write) {
go_crypto_tls_uprobe(ctx, &go_write_context, ABI0);
return 1;
}
SEC("uprobe/go_crypto_tls_write_ex")
void BPF_KPROBE(go_crypto_tls_write_ex) {
go_crypto_tls_ex_uprobe(ctx, &go_write_context, 0);
SEC("uprobe/go_crypto_tls_abi0_write_ex")
int BPF_KPROBE(go_crypto_tls_abi0_write_ex) {
go_crypto_tls_ex_uprobe(ctx, &go_write_context, 0, ABI0);
return 1;
}
SEC("uprobe/go_crypto_tls_read")
void BPF_KPROBE(go_crypto_tls_read) {
go_crypto_tls_uprobe(ctx, &go_read_context);
SEC("uprobe/go_crypto_tls_abi0_read")
int BPF_KPROBE(go_crypto_tls_abi0_read) {
go_crypto_tls_uprobe(ctx, &go_read_context, ABI0);
return 1;
}
SEC("uprobe/go_crypto_tls_read_ex")
void BPF_KPROBE(go_crypto_tls_read_ex) {
go_crypto_tls_ex_uprobe(ctx, &go_read_context, FLAGS_IS_READ_BIT);
SEC("uprobe/go_crypto_tls_abi0_read_ex")
int BPF_KPROBE(go_crypto_tls_abi0_read_ex) {
go_crypto_tls_ex_uprobe(ctx, &go_read_context, FLAGS_IS_READ_BIT, ABI0);
return 1;
}
SEC("uprobe/go_crypto_tls_abi_internal_write")
int BPF_KPROBE(go_crypto_tls_abi_internal_write) {
go_crypto_tls_uprobe(ctx, &go_write_context, ABIInternal);
return 1;
}
SEC("uprobe/go_crypto_tls_abi_internal_write_ex")
int BPF_KPROBE(go_crypto_tls_abi_internal_write_ex) {
go_crypto_tls_ex_uprobe(ctx, &go_write_context, 0, ABIInternal);
return 1;
}
SEC("uprobe/go_crypto_tls_abi_internal_read")
int BPF_KPROBE(go_crypto_tls_abi_internal_read) {
go_crypto_tls_uprobe(ctx, &go_read_context, ABIInternal);
return 1;
}
SEC("uprobe/go_crypto_tls_abi_internal_read_ex")
int BPF_KPROBE(go_crypto_tls_abi_internal_read_ex) {
go_crypto_tls_ex_uprobe(ctx, &go_read_context, FLAGS_IS_READ_BIT, ABIInternal);
return 1;
}

View File

@@ -7,9 +7,11 @@ Copyright (C) UP9 Inc.
#ifndef __COMMON__
#define __COMMON__
#define AF_INET 2 /* Internet IP Protocol */
const __s32 invalid_fd = -1;
static int add_address_to_chunk(struct pt_regs *ctx, struct tls_chunk* chunk, __u64 id, __u32 fd);
static int add_address_to_chunk(struct pt_regs *ctx, struct tls_chunk* chunk, __u64 id, __u32 fd, struct ssl_info* info);
static void send_chunk_part(struct pt_regs *ctx, __u8* buffer, __u64 id, struct tls_chunk* chunk, int start, int end);
static void send_chunk(struct pt_regs *ctx, __u8* buffer, __u64 id, struct tls_chunk* chunk);
static void output_ssl_chunk(struct pt_regs *ctx, struct ssl_info* info, int count_bytes, __u64 id, __u32 flags);

View File

@@ -0,0 +1,52 @@
/*
Note: This file is licenced differently from the rest of the project
SPDX-License-Identifier: GPL-2.0
Copyright (C) UP9 Inc.
*/
#ifndef __GO_ABI_0__
#define __GO_ABI_0__
/*
Go ABI0 (<=1.16) specification
https://go.dev/doc/asm
Since ABI0 is a stack-based calling convention, we only need the stack pointer and,
where applicable, the Goroutine pointer
*/
#include "target_arch.h"
#if defined(bpf_target_x86)
#ifdef __i386__
#define GO_ABI_0_PT_REGS_SP(x) ((x)->esp)
#else
#define GO_ABI_0_PT_REGS_SP(x) ((x)->sp)
#endif
#elif defined(bpf_target_arm)
#define GO_ABI_0_PT_REGS_SP(x) ((x)->uregs[13])
#define GO_ABI_0_PT_REGS_GP(x) ((x)->uregs[10])
#elif defined(bpf_target_arm64)
/* arm64 provides struct user_pt_regs instead of struct pt_regs to userspace */
struct pt_regs;
#define PT_REGS_ARM64 const volatile struct user_pt_regs
#define GO_ABI_0_PT_REGS_SP(x) (((PT_REGS_ARM64 *)(x))->sp)
#define GO_ABI_0_PT_REGS_GP(x) (((PT_REGS_ARM64 *)(x))->regs[18])
#elif defined(bpf_target_powerpc)
#define GO_ABI_0_PT_REGS_SP(x) ((x)->sp)
#define GO_ABI_0_PT_REGS_GP(x) ((x)->gpr[30])
#endif
#endif /* __GO_ABI_0__ */

View File

@@ -8,54 +8,11 @@ Copyright (C) UP9 Inc.
#define __GO_ABI_INTERNAL__
/*
Go internal ABI specification
Go internal ABI (1.17/current) specification
https://go.googlesource.com/go/+/refs/heads/master/src/cmd/compile/abi-internal.md
*/
/* Scan the ARCH passed in from ARCH env variable */
#if defined(__TARGET_ARCH_x86)
#define bpf_target_x86
#define bpf_target_defined
#elif defined(__TARGET_ARCH_s390)
#define bpf_target_s390
#define bpf_target_defined
#elif defined(__TARGET_ARCH_arm)
#define bpf_target_arm
#define bpf_target_defined
#elif defined(__TARGET_ARCH_arm64)
#define bpf_target_arm64
#define bpf_target_defined
#elif defined(__TARGET_ARCH_mips)
#define bpf_target_mips
#define bpf_target_defined
#elif defined(__TARGET_ARCH_powerpc)
#define bpf_target_powerpc
#define bpf_target_defined
#elif defined(__TARGET_ARCH_sparc)
#define bpf_target_sparc
#define bpf_target_defined
#else
#undef bpf_target_defined
#endif
/* Fall back to what the compiler says */
#ifndef bpf_target_defined
#if defined(__x86_64__)
#define bpf_target_x86
#elif defined(__s390__)
#define bpf_target_s390
#elif defined(__arm__)
#define bpf_target_arm
#elif defined(__aarch64__)
#define bpf_target_arm64
#elif defined(__mips__)
#define bpf_target_mips
#elif defined(__powerpc__)
#define bpf_target_powerpc
#elif defined(__sparc__)
#define bpf_target_sparc
#endif
#endif
#include "target_arch.h"
#if defined(bpf_target_x86)
@@ -78,15 +35,15 @@ https://github.com/golang/go/blob/go1.17.6/src/cmd/compile/internal/ssa/gen/AMD6
#else
#define GO_ABI_INTERNAL_PT_REGS_R1(x) ((x)->rax)
#define GO_ABI_INTERNAL_PT_REGS_R2(x) ((x)->rcx)
#define GO_ABI_INTERNAL_PT_REGS_R3(x) ((x)->rdx)
#define GO_ABI_INTERNAL_PT_REGS_R4(x) ((x)->rbx)
#define GO_ABI_INTERNAL_PT_REGS_R5(x) ((x)->rbp)
#define GO_ABI_INTERNAL_PT_REGS_R6(x) ((x)->rsi)
#define GO_ABI_INTERNAL_PT_REGS_R7(x) ((x)->rdi)
#define GO_ABI_INTERNAL_PT_REGS_SP(x) ((x)->rsp)
#define GO_ABI_INTERNAL_PT_REGS_FP(x) ((x)->rbp)
#define GO_ABI_INTERNAL_PT_REGS_R1(x) ((x)->ax)
#define GO_ABI_INTERNAL_PT_REGS_R2(x) ((x)->cx)
#define GO_ABI_INTERNAL_PT_REGS_R3(x) ((x)->dx)
#define GO_ABI_INTERNAL_PT_REGS_R4(x) ((x)->bx)
#define GO_ABI_INTERNAL_PT_REGS_R5(x) ((x)->bp)
#define GO_ABI_INTERNAL_PT_REGS_R6(x) ((x)->si)
#define GO_ABI_INTERNAL_PT_REGS_R7(x) ((x)->di)
#define GO_ABI_INTERNAL_PT_REGS_SP(x) ((x)->sp)
#define GO_ABI_INTERNAL_PT_REGS_FP(x) ((x)->bp)
#define GO_ABI_INTERNAL_PT_REGS_GP(x) ((x)->r14)
#endif

View File

@@ -8,9 +8,16 @@ Copyright (C) UP9 Inc.
#define __HEADERS__
#include <stddef.h>
#include <linux/bpf.h>
#include <linux/ptrace.h>
#include "target_arch.h"
#include "vmlinux_x86.h"
#include "vmlinux_arm64.h"
#include "legacy_kernel.h"
#include <bpf/bpf_endian.h>
#include <bpf/bpf_helpers.h>
#include "bpf/bpf_tracing.h"
#include <bpf/bpf_tracing.h>
#include <bpf/bpf_core_read.h>
#endif /* __HEADERS__ */

View File

@@ -0,0 +1,50 @@
#ifndef __LEGACY_KERNEL_H__
#define __LEGACY_KERNEL_H__
#if defined(bpf_target_x86)
struct thread_struct___v46 {
struct desc_struct tls_array[3];
unsigned long sp0;
unsigned long sp;
unsigned short es;
unsigned short ds;
unsigned short fsindex;
unsigned short gsindex;
unsigned long fs;
unsigned long gs;
struct perf_event ptrace_bps[4];
unsigned long debugreg6;
unsigned long ptrace_dr7;
unsigned long cr2;
unsigned long trap_nr;
unsigned long error_code;
unsigned long io_bitmap_ptr;
unsigned long iopl;
unsigned io_bitmap_max;
long: 63;
long: 64;
long: 64;
long: 64;
long: 64;
long: 64;
struct fpu fpu;
};
#elif defined(bpf_target_arm)
// Commented out since thread_struct is not used in ARM64 yet.
// struct thread_struct___v46 {
// struct cpu_context cpu_context;
// long: 64;
// unsigned long tp_value;
// struct fpsimd_state fpsimd_state;
// unsigned long fault_address;
// unsigned long fault_code;
// struct debug_info debug;
// };
#endif
#endif /* __LEGACY_KERNEL_H__ */


@@ -26,6 +26,11 @@ Copyright (C) UP9 Inc.
#define LOG_ERROR_PUTTING_CONNECT_INFO (14)
#define LOG_ERROR_GETTING_CONNECT_INFO (15)
#define LOG_ERROR_READING_CONNECT_INFO (16)
#define LOG_ERROR_READING_SOCKET_FAMILY (17)
#define LOG_ERROR_READING_SOCKET_DADDR (18)
#define LOG_ERROR_READING_SOCKET_SADDR (19)
#define LOG_ERROR_READING_SOCKET_DPORT (20)
#define LOG_ERROR_READING_SOCKET_SPORT (21)
// Sometimes we have the same error happening from different locations.
// In order to be able to distinguish between them in the log, we add an


@@ -24,6 +24,21 @@ Copyright (C) UP9 Inc.
//
// Be careful when editing, alignment and padding should be exactly the same in go/c.
//
typedef enum {
ADDRESS_INFO_MODE_UNDEFINED,
ADDRESS_INFO_MODE_SINGLE,
ADDRESS_INFO_MODE_PAIR,
} address_info_mode;
struct address_info {
address_info_mode mode;
__be32 saddr;
__be32 daddr;
__be16 sport;
__be16 dport;
};
struct tls_chunk {
__u32 pid;
__u32 tgid;
@@ -32,7 +47,7 @@ struct tls_chunk {
__u32 recorded;
__u32 fd;
__u32 flags;
__u8 address[16];
struct address_info address_info;
__u8 data[CHUNK_SIZE]; // CHUNK_SIZE must be a power of 2
};
@@ -41,6 +56,7 @@ struct ssl_info {
__u32 buffer_len;
__u32 fd;
__u64 created_at_nano;
struct address_info address_info;
// for ssl_write and ssl_read must be zero
// for ssl_write_ex and ssl_read_ex save the *written/*readbytes pointer.
@@ -53,6 +69,13 @@ struct fd_info {
__u8 flags;
};
struct goid_offsets {
__u64 g_addr_offset;
__u64 goid_offset;
};
const struct goid_offsets *unused __attribute__((unused));
// Heap-like area for eBPF programs - stack size limited to 512 bytes, we must use maps for bigger (chunk) objects.
//
struct {
@@ -91,6 +114,7 @@ BPF_LRU_HASH(openssl_write_context, __u64, struct ssl_info);
BPF_LRU_HASH(openssl_read_context, __u64, struct ssl_info);
// Go specific
BPF_HASH(goid_offsets_map, __u32, struct goid_offsets);
BPF_LRU_HASH(go_write_context, __u64, struct ssl_info);
BPF_LRU_HASH(go_read_context, __u64, struct ssl_info);
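
As the comment above warns, these structs cross the kernel/user-space boundary byte for byte, so the Go bindings generated later in this change mirror the same field order and widths. A minimal sketch of the Go side of the new address_info layout (the field names come from the generated tlsTapperTlsChunk in this diff; the standalone type name is illustrative):

// Mirrors struct address_info: a 4-byte enum followed by two __be32 and two __be16 fields,
// 16 bytes total with no padding on either side.
type addressInfo struct {
    Mode  int32  // address_info_mode
    Saddr uint32 // __be32, network byte order
    Daddr uint32 // __be32, network byte order
    Sport uint16 // __be16, stored with bpf_htons in the tcp kprobes
    Dport uint16 // __be16, network byte order
}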


@@ -0,0 +1,55 @@
/*
Note: This file is licenced differently from the rest of the project
SPDX-License-Identifier: GPL-2.0
Copyright (C) UP9 Inc.
*/
#ifndef __TARGET_ARCH__
#define __TARGET_ARCH__
/* Scan the ARCH passed in from ARCH env variable */
#if defined(__TARGET_ARCH_x86)
#define bpf_target_x86
#define bpf_target_defined
#elif defined(__TARGET_ARCH_s390)
#define bpf_target_s390
#define bpf_target_defined
#elif defined(__TARGET_ARCH_arm)
#define bpf_target_arm
#define bpf_target_defined
#elif defined(__TARGET_ARCH_arm64)
#define bpf_target_arm64
#define bpf_target_defined
#elif defined(__TARGET_ARCH_mips)
#define bpf_target_mips
#define bpf_target_defined
#elif defined(__TARGET_ARCH_powerpc)
#define bpf_target_powerpc
#define bpf_target_defined
#elif defined(__TARGET_ARCH_sparc)
#define bpf_target_sparc
#define bpf_target_defined
#else
#undef bpf_target_defined
#endif
/* Fall back to what the compiler says */
#ifndef bpf_target_defined
#if defined(__x86_64__)
#define bpf_target_x86
#elif defined(__s390__)
#define bpf_target_s390
#elif defined(__arm__)
#define bpf_target_arm
#elif defined(__aarch64__)
#define bpf_target_arm64
#elif defined(__mips__)
#define bpf_target_mips
#elif defined(__powerpc__)
#define bpf_target_powerpc
#elif defined(__sparc__)
#define bpf_target_sparc
#endif
#endif
#endif /* __TARGET_ARCH__ */

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -42,19 +42,20 @@ static __always_inline int get_count_bytes(struct pt_regs *ctx, struct ssl_info*
}
static __always_inline void ssl_uprobe(struct pt_regs *ctx, void* ssl, void* buffer, int num, struct bpf_map_def* map_fd, size_t *count_ptr) {
long err;
__u64 id = bpf_get_current_pid_tgid();
if (!should_tap(id >> 32)) {
return;
}
struct ssl_info *infoPtr = bpf_map_lookup_elem(map_fd, &id);
struct ssl_info info = lookup_ssl_info(ctx, &openssl_write_context, id);
struct ssl_info info = lookup_ssl_info(ctx, map_fd, id);
info.count_ptr = count_ptr;
info.buffer = buffer;
long err = bpf_map_update_elem(map_fd, &id, &info, BPF_ANY);
err = bpf_map_update_elem(map_fd, &id, &info, BPF_ANY);
if (err != 0) {
log_error(ctx, LOG_ERROR_PUTTING_SSL_CONTEXT, id, err, 0l);
@@ -67,7 +68,7 @@ static __always_inline void ssl_uretprobe(struct pt_regs *ctx, struct bpf_map_de
if (!should_tap(id >> 32)) {
return;
}
struct ssl_info *infoPtr = bpf_map_lookup_elem(map_fd, &id);
if (infoPtr == NULL) {
@@ -100,10 +101,10 @@ static __always_inline void ssl_uretprobe(struct pt_regs *ctx, struct bpf_map_de
return;
}
int count_bytes = get_count_bytes(ctx, &info, id);
if (count_bytes <= 0) {
return;
}
int count_bytes = get_count_bytes(ctx, &info, id);
if (count_bytes <= 0) {
return;
}
output_ssl_chunk(ctx, &info, count_bytes, id, flags);
}


@@ -0,0 +1,79 @@
#include "include/headers.h"
#include "include/maps.h"
#include "include/log.h"
#include "include/logger_messages.h"
#include "include/pids.h"
#include "include/common.h"
static __always_inline void tcp_kprobe(struct pt_regs *ctx, struct bpf_map_def *map_fd, _Bool is_send) {
long err;
__u64 id = bpf_get_current_pid_tgid();
__u32 pid = id >> 32;
if (!should_tap(pid)) {
return;
}
struct ssl_info *info_ptr = bpf_map_lookup_elem(map_fd, &id);
// Happens when the connection is not tls
if (info_ptr == NULL) {
return;
}
struct sock *sk = (struct sock *) PT_REGS_PARM1(ctx);
short unsigned int family;
err = bpf_probe_read(&family, sizeof(family), (void *)&sk->__sk_common.skc_family);
if (err != 0) {
log_error(ctx, LOG_ERROR_READING_SOCKET_FAMILY, id, err, 0l);
return;
}
if (family != AF_INET) {
return;
}
// daddr, saddr and dport are in network byte order (big endian)
// sport is in host byte order
__be32 saddr;
__be32 daddr;
__be16 dport;
__u16 sport;
err = bpf_probe_read(&saddr, sizeof(saddr), (void *)&sk->__sk_common.skc_rcv_saddr);
if (err != 0) {
log_error(ctx, LOG_ERROR_READING_SOCKET_SADDR, id, err, 0l);
return;
}
err = bpf_probe_read(&daddr, sizeof(daddr), (void *)&sk->__sk_common.skc_daddr);
if (err != 0) {
log_error(ctx, LOG_ERROR_READING_SOCKET_DADDR, id, err, 0l);
return;
}
err = bpf_probe_read(&dport, sizeof(dport), (void *)&sk->__sk_common.skc_dport);
if (err != 0) {
log_error(ctx, LOG_ERROR_READING_SOCKET_DPORT, id, err, 0l);
return;
}
err = bpf_probe_read(&sport, sizeof(sport), (void *)&sk->__sk_common.skc_num);
if (err != 0) {
log_error(ctx, LOG_ERROR_READING_SOCKET_SPORT, id, err, 0l);
return;
}
info_ptr->address_info.mode = ADDRESS_INFO_MODE_PAIR;
info_ptr->address_info.daddr = daddr;
info_ptr->address_info.saddr = saddr;
info_ptr->address_info.dport = dport;
info_ptr->address_info.sport = bpf_htons(sport);
}
SEC("kprobe/tcp_sendmsg")
void BPF_KPROBE(tcp_sendmsg) {
tcp_kprobe(ctx, &openssl_write_context, true);
}
SEC("kprobe/tcp_recvmsg")
void BPF_KPROBE(tcp_recvmsg) {
tcp_kprobe(ctx, &openssl_read_context, false);
}


@@ -15,6 +15,7 @@ Copyright (C) UP9 Inc.
//
#include "common.c"
#include "openssl_uprobes.c"
#include "tcp_kprobes.c"
#include "go_uprobes.c"
#include "fd_tracepoints.c"
#include "fd_to_address_tracepoints.c"


@@ -20,4 +20,9 @@ var bpfLogMessages = []string{
/*0014*/ "[%d] Unable to put connect info [err: %d]",
/*0015*/ "[%d] Unable to get connect info",
/*0016*/ "[%d] Unable to read connect info [err: %d]",
/*0017*/ "[%d] Unable to read socket family [err: %d]",
/*0018*/ "[%d] Unable to read socket daddr [err: %d]",
/*0019*/ "[%d] Unable to read socket saddr [err: %d]",
/*0019*/ "[%d] Unable to read socket dport [err: %d]",
/*0021*/ "[%d] Unable to read socket sport [err: %d]",
}


@@ -1,38 +1,33 @@
package tlstapper
import (
"bytes"
"encoding/binary"
"net"
"unsafe"
"github.com/go-errors/errors"
"github.com/up9inc/mizu/tap/api"
)
const FlagsIsClientBit uint32 = 1 << 0
const FlagsIsReadBit uint32 = 1 << 1
const (
addressInfoModeUndefined = iota
addressInfoModeSingle
addressInfoModePair
)
func (c *tlsTapperTlsChunk) getAddress() (net.IP, uint16, error) {
address := bytes.NewReader(c.Address[:])
var family uint16
var port uint16
var ip32 uint32
func (c *tlsTapperTlsChunk) getSrcAddress() (net.IP, uint16) {
ip := intToIP(c.AddressInfo.Saddr)
port := ntohs(c.AddressInfo.Sport)
if err := binary.Read(address, binary.BigEndian, &family); err != nil {
return nil, 0, errors.Wrap(err, 0)
}
return ip, port
}
if err := binary.Read(address, binary.BigEndian, &port); err != nil {
return nil, 0, errors.Wrap(err, 0)
}
func (c *tlsTapperTlsChunk) getDstAddress() (net.IP, uint16) {
ip := intToIP(c.AddressInfo.Daddr)
port := ntohs(c.AddressInfo.Dport)
if err := binary.Read(address, binary.BigEndian, &ip32); err != nil {
return nil, 0, errors.Wrap(err, 0)
}
ip := net.IP{uint8(ip32 >> 24), uint8(ip32 >> 16), uint8(ip32 >> 8), uint8(ip32)}
return ip, port, nil
return ip, port
}
func (c *tlsTapperTlsChunk) isClient() bool {
@@ -59,26 +54,54 @@ func (c *tlsTapperTlsChunk) isRequest() bool {
return (c.isClient() && c.isWrite()) || (c.isServer() && c.isRead())
}
func (c *tlsTapperTlsChunk) getAddressPair() (addressPair, error) {
ip, port, err := c.getAddress()
func (c *tlsTapperTlsChunk) getAddressPair() (addressPair, bool) {
var (
srcIp, dstIp net.IP
srcPort, dstPort uint16
full bool
)
if err != nil {
return addressPair{}, err
switch c.AddressInfo.Mode {
case addressInfoModeSingle:
if c.isRequest() {
srcIp, srcPort = api.UnknownIp, api.UnknownPort
dstIp, dstPort = c.getSrcAddress()
} else {
srcIp, srcPort = c.getSrcAddress()
dstIp, dstPort = api.UnknownIp, api.UnknownPort
}
full = false
case addressInfoModePair:
if c.isRequest() {
srcIp, srcPort = c.getSrcAddress()
dstIp, dstPort = c.getDstAddress()
} else {
srcIp, srcPort = c.getDstAddress()
dstIp, dstPort = c.getSrcAddress()
}
full = true
case addressInfoModeUndefined:
srcIp, srcPort = api.UnknownIp, api.UnknownPort
dstIp, dstPort = api.UnknownIp, api.UnknownPort
full = false
}
if c.isRequest() {
return addressPair{
srcIp: api.UnknownIp,
srcPort: api.UnknownPort,
dstIp: ip,
dstPort: port,
}, nil
} else {
return addressPair{
srcIp: ip,
srcPort: port,
dstIp: api.UnknownIp,
dstPort: api.UnknownPort,
}, nil
}
return addressPair{
srcIp: srcIp,
srcPort: srcPort,
dstIp: dstIp,
dstPort: dstPort,
}, full
}
// intToIP converts IPv4 number to net.IP
func intToIP(ip32be uint32) net.IP {
return net.IPv4(uint8(ip32be), uint8(ip32be>>8), uint8(ip32be>>16), uint8(ip32be>>24))
}
// ntohs converts big endian (network byte order) to little endian (assuming that's the host byte order)
func ntohs(i16be uint16) uint16 {
b := make([]byte, 2)
binary.BigEndian.PutUint16(b, i16be)
return *(*uint16)(unsafe.Pointer(&b[0]))
}
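
Every address field handed over from the kernel arrives in network byte order (the tcp kprobes even convert sport with bpf_htons before storing it), so these two helpers do the byte-order flip on the way out. A small worked example with illustrative values, assuming a little-endian host as the ntohs comment above does:

package main

import (
    "encoding/binary"
    "fmt"
    "net"
    "unsafe"
)

// Same helpers as in this change.
func intToIP(ip32be uint32) net.IP {
    return net.IPv4(uint8(ip32be), uint8(ip32be>>8), uint8(ip32be>>16), uint8(ip32be>>24))
}

func ntohs(i16be uint16) uint16 {
    b := make([]byte, 2)
    binary.BigEndian.PutUint16(b, i16be)
    return *(*uint16)(unsafe.Pointer(&b[0]))
}

func main() {
    // 127.0.0.1:443 as the kernel hands it over: the bytes are in network order,
    // so reading them into native integers on a little-endian host gives these values.
    saddr := uint32(0x0100007f) // bytes 7f 00 00 01
    sport := uint16(0xbb01)     // bytes 01 bb
    fmt.Println(intToIP(saddr), ntohs(sport)) // 127.0.0.1 443
}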


@@ -31,9 +31,32 @@ func (s *goHooks) installUprobes(bpfObjects *tlsTapperObjects, filePath string)
func (s *goHooks) installHooks(bpfObjects *tlsTapperObjects, ex *link.Executable, offsets goOffsets) error {
var err error
goCryptoTlsWrite := bpfObjects.GoCryptoTlsAbiInternalWrite
goCryptoTlsWriteEx := bpfObjects.GoCryptoTlsAbiInternalWriteEx
goCryptoTlsRead := bpfObjects.GoCryptoTlsAbiInternalRead
goCryptoTlsReadEx := bpfObjects.GoCryptoTlsAbiInternalReadEx
if offsets.Abi == ABI0 {
goCryptoTlsWrite = bpfObjects.GoCryptoTlsAbi0Write
goCryptoTlsWriteEx = bpfObjects.GoCryptoTlsAbi0WriteEx
goCryptoTlsRead = bpfObjects.GoCryptoTlsAbi0Read
goCryptoTlsReadEx = bpfObjects.GoCryptoTlsAbi0ReadEx
// Pass goid and g struct offsets to an eBPF map to retrieve it in eBPF context
if err := bpfObjects.tlsTapperMaps.GoidOffsetsMap.Put(
uint32(0),
tlsTapperGoidOffsets{
G_addrOffset: offsets.GStructOffset,
GoidOffset: offsets.GoidOffset,
},
); err != nil {
return errors.Wrap(err, 0)
}
}
// Symbol points to
// [`crypto/tls.(*Conn).Write`](https://github.com/golang/go/blob/go1.17.6/src/crypto/tls/conn.go#L1099)
s.goWriteProbe, err = ex.Uprobe(goWriteSymbol, bpfObjects.GoCryptoTlsWrite, &link.UprobeOptions{
s.goWriteProbe, err = ex.Uprobe(goWriteSymbol, goCryptoTlsWrite, &link.UprobeOptions{
Offset: offsets.GoWriteOffset.enter,
})
@@ -42,7 +65,7 @@ func (s *goHooks) installHooks(bpfObjects *tlsTapperObjects, ex *link.Executable
}
for _, offset := range offsets.GoWriteOffset.exits {
probe, err := ex.Uprobe(goWriteSymbol, bpfObjects.GoCryptoTlsWriteEx, &link.UprobeOptions{
probe, err := ex.Uprobe(goWriteSymbol, goCryptoTlsWriteEx, &link.UprobeOptions{
Offset: offset,
})
@@ -55,7 +78,7 @@ func (s *goHooks) installHooks(bpfObjects *tlsTapperObjects, ex *link.Executable
// Symbol points to
// [`crypto/tls.(*Conn).Read`](https://github.com/golang/go/blob/go1.17.6/src/crypto/tls/conn.go#L1263)
s.goReadProbe, err = ex.Uprobe(goReadSymbol, bpfObjects.GoCryptoTlsRead, &link.UprobeOptions{
s.goReadProbe, err = ex.Uprobe(goReadSymbol, goCryptoTlsRead, &link.UprobeOptions{
Offset: offsets.GoReadOffset.enter,
})
@@ -64,7 +87,7 @@ func (s *goHooks) installHooks(bpfObjects *tlsTapperObjects, ex *link.Executable
}
for _, offset := range offsets.GoReadOffset.exits {
probe, err := ex.Uprobe(goReadSymbol, bpfObjects.GoCryptoTlsReadEx, &link.UprobeOptions{
probe, err := ex.Uprobe(goReadSymbol, goCryptoTlsReadEx, &link.UprobeOptions{
Offset: offset,
})


@@ -2,8 +2,10 @@ package tlstapper
import (
"bufio"
"debug/dwarf"
"debug/elf"
"fmt"
"io"
"os"
"runtime"
@@ -13,9 +15,22 @@ import (
"github.com/up9inc/mizu/logger"
)
type goAbi int
const (
ABI0 goAbi = iota
ABIInternal
)
const PtrSize int = 8
type goOffsets struct {
GoWriteOffset *goExtendedOffset
GoReadOffset *goExtendedOffset
GoVersion string
Abi goAbi
GoidOffset uint64
GStructOffset uint64
}
type goExtendedOffset struct {
@@ -24,30 +39,33 @@ type goExtendedOffset struct {
}
const (
minimumSupportedGoVersion = "1.17.0"
goVersionSymbol = "runtime.buildVersion.str"
goWriteSymbol = "crypto/tls.(*Conn).Write"
goReadSymbol = "crypto/tls.(*Conn).Read"
minimumABIInternalGoVersion = "1.17.0"
goVersionSymbol = "runtime.buildVersion.str" // symbol does not exist in Go (<=1.16)
goWriteSymbol = "crypto/tls.(*Conn).Write"
goReadSymbol = "crypto/tls.(*Conn).Read"
)
func findGoOffsets(filePath string) (goOffsets, error) {
offsets, err := getOffsets(filePath)
offsets, goidOffset, gStructOffset, err := getOffsets(filePath)
if err != nil {
return goOffsets{}, err
}
abi := ABI0
var passed bool
var goVersion string
goVersionOffset, err := getOffset(offsets, goVersionSymbol)
if err != nil {
return goOffsets{}, err
if err == nil {
// TODO: Replace this logic with https://pkg.go.dev/debug/buildinfo#ReadFile once we upgrade to 1.18
passed, goVersion, err = checkGoVersion(filePath, goVersionOffset)
if err != nil {
return goOffsets{}, fmt.Errorf("Checking Go version: %s", err)
}
}
passed, goVersion, err := checkGoVersion(filePath, goVersionOffset)
if err != nil {
return goOffsets{}, fmt.Errorf("Checking Go version: %s", err)
}
if !passed {
return goOffsets{}, fmt.Errorf("Unsupported Go version: %s", goVersion)
if passed {
abi = ABIInternal
}
writeOffset, err := getOffset(offsets, goWriteSymbol)
@@ -63,10 +81,139 @@ func findGoOffsets(filePath string) (goOffsets, error) {
return goOffsets{
GoWriteOffset: writeOffset,
GoReadOffset: readOffset,
GoVersion: goVersion,
Abi: abi,
GoidOffset: goidOffset,
GStructOffset: gStructOffset,
}, nil
}
func getOffsets(filePath string) (offsets map[string]*goExtendedOffset, err error) {
func getSymbol(exe *elf.File, name string) *elf.Symbol {
symbols, err := exe.Symbols()
if err != nil {
return nil
}
for _, symbol := range symbols {
if symbol.Name == name {
s := symbol
return &s
}
}
return nil
}
func getGStructOffset(exe *elf.File) (gStructOffset uint64, err error) {
// This is a bit arcane. Essentially:
// - If the program is pure Go, it can do whatever it wants, and puts the G
// pointer at %fs-8 on 64 bit.
// - %gs is the index of private storage in GDT on 32 bit, and puts the G
// pointer at -4(tls).
// - Otherwise, Go asks the external linker to place the G pointer by
// emitting runtime.tlsg, a TLS symbol, which is relocated to the chosen
// offset in libc's TLS block.
// - On ARM64 (but really, any architecture other than i386 and x86_64) the
// offset is calculated using runtime.tls_g and the formula is different.
var tls *elf.Prog
for _, prog := range exe.Progs {
if prog.Type == elf.PT_TLS {
tls = prog
break
}
}
switch exe.Machine {
case elf.EM_X86_64, elf.EM_386:
tlsg := getSymbol(exe, "runtime.tlsg")
if tlsg == nil || tls == nil {
gStructOffset = ^uint64(PtrSize) + 1 //-ptrSize
return
}
// According to https://reviews.llvm.org/D61824, linkers must pad the actual
// size of the TLS segment to ensure that (tlsoffset%align) == (vaddr%align).
// This formula, copied from the lld code, matches that.
// https://github.com/llvm-mirror/lld/blob/9aef969544981d76bea8e4d1961d3a6980980ef9/ELF/InputSection.cpp#L643
memsz := tls.Memsz + (-tls.Vaddr-tls.Memsz)&(tls.Align-1)
// The TLS register points to the end of the TLS block, which is
// tls.Memsz long. runtime.tlsg is an offset from the beginning of that block.
gStructOffset = ^(memsz) + 1 + tlsg.Value // -tls.Memsz + tlsg.Value
case elf.EM_AARCH64:
tlsg := getSymbol(exe, "runtime.tls_g")
if tlsg == nil || tls == nil {
gStructOffset = 2 * uint64(PtrSize)
return
}
gStructOffset = tlsg.Value + uint64(PtrSize*2) + ((tls.Vaddr - uint64(PtrSize*2)) & (tls.Align - 1))
default:
// we should never get here
err = fmt.Errorf("architecture not supported")
}
return
}
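
The ^x + 1 expressions above are two's-complement negation written out for uint64 values, so the computed offset is really a negative displacement from the TLS register once it is interpreted as signed. A tiny sketch with made-up numbers (not taken from any real binary) that shows the identity:

package main

import "fmt"

func main() {
    memsz := uint64(0xd0)         // illustrative padded TLS segment size
    tlsgValue := uint64(0x10)     // illustrative runtime.tlsg value
    off := ^memsz + 1 + tlsgValue // same as tlsgValue - memsz, wrapped into uint64
    fmt.Printf("%#x\n", off)      // 0xffffffffffffff40
    fmt.Println(int64(off))       // -192, i.e. tlsg.Value - tls.Memsz
}
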
func getGoidOffset(elfFile *elf.File) (goidOffset uint64, gStructOffset uint64, err error) {
var dwarfData *dwarf.Data
dwarfData, err = elfFile.DWARF()
if err != nil {
return
}
entryReader := dwarfData.Reader()
var runtimeGOffset uint64
var seenRuntimeG bool
for {
// Read all entries in sequence
var entry *dwarf.Entry
entry, err = entryReader.Next()
if err == io.EOF || entry == nil {
// We've reached the end of DWARF entries
break
}
// Check if this entry is a struct
if entry.Tag == dwarf.TagStructType {
// Go through fields
for _, field := range entry.Field {
if field.Attr == dwarf.AttrName {
val := field.Val.(string)
if val == "runtime.g" {
runtimeGOffset = uint64(entry.Offset)
seenRuntimeG = true
}
}
}
}
// Check if this entry is a struct member
if seenRuntimeG && entry.Tag == dwarf.TagMember {
// Go through fields
for _, field := range entry.Field {
if field.Attr == dwarf.AttrName {
val := field.Val.(string)
if val == "goid" {
goidOffset = uint64(entry.Offset) - runtimeGOffset - 0x4b
gStructOffset, err = getGStructOffset(elfFile)
return
}
}
}
}
}
err = fmt.Errorf("goid not found in DWARF")
return
}
func getOffsets(filePath string) (offsets map[string]*goExtendedOffset, goidOffset uint64, gStructOffset uint64, err error) {
var engine gapstone.Engine
switch runtime.GOARCH {
case "amd64":
@@ -104,13 +251,13 @@ func getOffsets(filePath string) (offsets map[string]*goExtendedOffset, err erro
}
defer fd.Close()
var se *elf.File
se, err = elf.NewFile(fd)
var elfFile *elf.File
elfFile, err = elf.NewFile(fd)
if err != nil {
return
}
textSection := se.Section(".text")
textSection := elfFile.Section(".text")
if textSection == nil {
err = fmt.Errorf("No text section")
return
@@ -124,7 +271,7 @@ func getOffsets(filePath string) (offsets map[string]*goExtendedOffset, err erro
}
var syms []elf.Symbol
syms, err = se.Symbols()
syms, err = elfFile.Symbols()
if err != nil {
return
}
@@ -132,7 +279,7 @@ func getOffsets(filePath string) (offsets map[string]*goExtendedOffset, err erro
offset := sym.Value
var lastProg *elf.Prog
for _, prog := range se.Progs {
for _, prog := range elfFile.Progs {
if prog.Vaddr <= sym.Value && sym.Value < (prog.Vaddr+prog.Memsz) {
offset = sym.Value - prog.Vaddr + prog.Off
lastProg = prog
@@ -189,6 +336,8 @@ func getOffsets(filePath string) (offsets map[string]*goExtendedOffset, err erro
offsets[sym.Name] = extendedOffset
}
goidOffset, gStructOffset, err = getGoidOffset(elfFile)
return
}
@@ -229,7 +378,7 @@ func checkGoVersion(filePath string, offset *goExtendedOffset) (bool, string, er
return false, goVersionStr, err
}
goVersionConstraint, err := semver.NewConstraint(fmt.Sprintf(">= %s", minimumSupportedGoVersion))
goVersionConstraint, err := semver.NewConstraint(fmt.Sprintf(">= %s", minimumABIInternalGoVersion))
if err != nil {
return false, goVersionStr, err
}


@@ -14,6 +14,8 @@ type sslHooks struct {
sslWriteExRetProbe link.Link
sslReadExProbe link.Link
sslReadExRetProbe link.Link
tcpSendmsg link.Link
tcpRecvmsg link.Link
}
func (s *sslHooks) installUprobes(bpfObjects *tlsTapperObjects, sslLibraryPath string) error {
@@ -103,6 +105,16 @@ func (s *sslHooks) installSslHooks(bpfObjects *tlsTapperObjects, sslLibrary *lin
}
}
s.tcpSendmsg, err = link.Kprobe("tcp_sendmsg", bpfObjects.TcpSendmsg, nil)
if err != nil {
return errors.Wrap(err, 0)
}
s.tcpRecvmsg, err = link.Kprobe("tcp_recvmsg", bpfObjects.TcpRecvmsg, nil)
if err != nil {
return errors.Wrap(err, 0)
}
return nil
}
@@ -149,5 +161,17 @@ func (s *sslHooks) close() []error {
}
}
if s.tcpSendmsg != nil {
if err := s.tcpSendmsg.Close(); err != nil {
returnValue = append(returnValue, err)
}
}
if s.tcpRecvmsg != nil {
if err := s.tcpRecvmsg.Close(); err != nil {
returnValue = append(returnValue, err)
}
}
return returnValue
}


@@ -17,37 +17,37 @@ type syscallHooks struct {
func (s *syscallHooks) installSyscallHooks(bpfObjects *tlsTapperObjects) error {
var err error
s.sysEnterRead, err = link.Tracepoint("syscalls", "sys_enter_read", bpfObjects.SysEnterRead)
s.sysEnterRead, err = link.Tracepoint("syscalls", "sys_enter_read", bpfObjects.SysEnterRead, nil)
if err != nil {
return errors.Wrap(err, 0)
}
s.sysEnterWrite, err = link.Tracepoint("syscalls", "sys_enter_write", bpfObjects.SysEnterWrite)
s.sysEnterWrite, err = link.Tracepoint("syscalls", "sys_enter_write", bpfObjects.SysEnterWrite, nil)
if err != nil {
return errors.Wrap(err, 0)
}
s.sysEnterAccept4, err = link.Tracepoint("syscalls", "sys_enter_accept4", bpfObjects.SysEnterAccept4)
s.sysEnterAccept4, err = link.Tracepoint("syscalls", "sys_enter_accept4", bpfObjects.SysEnterAccept4, nil)
if err != nil {
return errors.Wrap(err, 0)
}
s.sysExitAccept4, err = link.Tracepoint("syscalls", "sys_exit_accept4", bpfObjects.SysExitAccept4)
s.sysExitAccept4, err = link.Tracepoint("syscalls", "sys_exit_accept4", bpfObjects.SysExitAccept4, nil)
if err != nil {
return errors.Wrap(err, 0)
}
s.sysEnterConnect, err = link.Tracepoint("syscalls", "sys_enter_connect", bpfObjects.SysEnterConnect)
s.sysEnterConnect, err = link.Tracepoint("syscalls", "sys_enter_connect", bpfObjects.SysEnterConnect, nil)
if err != nil {
return errors.Wrap(err, 0)
}
s.sysExitConnect, err = link.Tracepoint("syscalls", "sys_exit_connect", bpfObjects.SysExitConnect)
s.sysExitConnect, err = link.Tracepoint("syscalls", "sys_exit_connect", bpfObjects.SysExitConnect, nil)
if err != nil {
return errors.Wrap(err, 0)


@@ -134,14 +134,9 @@ func (p *tlsPoller) pollChunksPerfBuffer(chunks chan<- *tlsTapperTlsChunk) {
func (p *tlsPoller) handleTlsChunk(chunk *tlsTapperTlsChunk, extension *api.Extension, emitter api.Emitter,
options *api.TrafficFilteringOptions, streamsMap api.TcpStreamMap) error {
address, err := p.getSockfdAddressPair(chunk)
address, err := p.getAddressPair(chunk)
if err != nil {
address, err = chunk.getAddressPair()
if err != nil {
return err
}
return err
}
key := buildTlsKey(address)
@@ -161,6 +156,22 @@ func (p *tlsPoller) handleTlsChunk(chunk *tlsTapperTlsChunk, extension *api.Exte
return nil
}
func (p *tlsPoller) getAddressPair(chunk *tlsTapperTlsChunk) (addressPair, error) {
addrPairFromChunk, full := chunk.getAddressPair()
if full {
return addrPairFromChunk, nil
}
addrPairFromSockfd, err := p.getSockfdAddressPair(chunk)
if err == nil {
return addrPairFromSockfd, nil
} else {
logger.Log.Error("failed to get address from sock fd:", err)
}
return addrPairFromChunk, err
}
func (p *tlsPoller) startNewTlsReader(chunk *tlsTapperTlsChunk, address *addressPair, key string,
emitter api.Emitter, extension *api.Extension, options *api.TrafficFilteringOptions,
streamsMap api.TcpStreamMap) *tlsReader {


@@ -6,13 +6,18 @@ import (
"github.com/cilium/ebpf/rlimit"
"github.com/go-errors/errors"
"github.com/moby/moby/pkg/parsers/kernel"
"github.com/up9inc/mizu/logger"
"github.com/up9inc/mizu/tap/api"
)
const GlobalTapPid = 0
//go:generate go run github.com/cilium/ebpf/cmd/bpf2go@0d0727ef53e2f53b1731c73f4c61e0f58693083a -target $BPF_TARGET -cflags $BPF_CFLAGS -type tls_chunk tlsTapper bpf/tls_tapper.c
// TODO: cilium/ebpf does not support .kconfig. Therefore, for now, we build object files per kernel version.
//go:generate go run github.com/cilium/ebpf/cmd/bpf2go@v0.9.0 -target $BPF_TARGET -cflags $BPF_CFLAGS -type tls_chunk -type goid_offsets tlsTapper bpf/tls_tapper.c
//go:generate go run github.com/cilium/ebpf/cmd/bpf2go@v0.9.0 -target $BPF_TARGET -cflags "${BPF_CFLAGS} -DKERNEL_BEFORE_4_6" -type tls_chunk -type goid_offsets tlsTapper46 bpf/tls_tapper.c
type TlsTapper struct {
bpfObjects tlsTapperObjects
@@ -27,13 +32,30 @@ type TlsTapper struct {
func (t *TlsTapper) Init(chunksBufferSize int, logBufferSize int, procfs string, extension *api.Extension) error {
logger.Log.Infof("Initializing tls tapper (chunksSize: %d) (logSize: %d)", chunksBufferSize, logBufferSize)
if err := setupRLimit(); err != nil {
var err error
err = setupRLimit()
if err != nil {
return err
}
var kernelVersion *kernel.VersionInfo
kernelVersion, err = kernel.GetKernelVersion()
if err != nil {
return err
}
logger.Log.Infof("Detected Linux kernel version: %s", kernelVersion)
t.bpfObjects = tlsTapperObjects{}
if err := loadTlsTapperObjects(&t.bpfObjects, nil); err != nil {
return errors.Wrap(err, 0)
// TODO: cilium/ebpf does not support .kconfig. Therefore, for now, we load object files according to kernel version.
if kernel.CompareKernelVersion(*kernelVersion, kernel.VersionInfo{Kernel: 4, Major: 6, Minor: 0}) < 1 {
if err := loadTlsTapper46Objects(&t.bpfObjects, nil); err != nil {
return errors.Wrap(err, 0)
}
} else {
if err := loadTlsTapperObjects(&t.bpfObjects, nil); err != nil {
return errors.Wrap(err, 0)
}
}
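
kernel.CompareKernelVersion returns -1, 0 or 1, so the "< 1" check above selects the legacy tlsTapper46 objects for kernels at or below 4.6.0. A small sketch of that decision in isolation (the helper name is mine):

package main

import (
    "fmt"

    "github.com/moby/moby/pkg/parsers/kernel"
)

// usesLegacyBpfObjects reports whether the pre-4.6 object files should be loaded.
// kernel.CompareKernelVersion returns -1 (older), 0 (equal) or 1 (newer).
func usesLegacyBpfObjects(v kernel.VersionInfo) bool {
    return kernel.CompareKernelVersion(v, kernel.VersionInfo{Kernel: 4, Major: 6, Minor: 0}) < 1
}

func main() {
    fmt.Println(usesLegacyBpfObjects(kernel.VersionInfo{Kernel: 4, Major: 4, Minor: 0}))  // true
    fmt.Println(usesLegacyBpfObjects(kernel.VersionInfo{Kernel: 4, Major: 6, Minor: 0}))  // true
    fmt.Println(usesLegacyBpfObjects(kernel.VersionInfo{Kernel: 5, Major: 10, Minor: 0})) // false
}
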
t.syscallHooks = syscallHooks{}
@@ -48,7 +70,6 @@ func (t *TlsTapper) Init(chunksBufferSize int, logBufferSize int, procfs string,
return err
}
var err error
t.poller, err = newTlsPoller(t, extension, procfs)
if err != nil {


@@ -0,0 +1,244 @@
// Code generated by bpf2go; DO NOT EDIT.
//go:build arm64
// +build arm64
package tlstapper
import (
"bytes"
_ "embed"
"fmt"
"io"
"github.com/cilium/ebpf"
)
type tlsTapper46GoidOffsets struct {
G_addrOffset uint64
GoidOffset uint64
}
type tlsTapper46TlsChunk struct {
Pid uint32
Tgid uint32
Len uint32
Start uint32
Recorded uint32
Fd uint32
Flags uint32
AddressInfo struct {
Mode int32
Saddr uint32
Daddr uint32
Sport uint16
Dport uint16
}
Data [4096]uint8
}
// loadTlsTapper46 returns the embedded CollectionSpec for tlsTapper46.
func loadTlsTapper46() (*ebpf.CollectionSpec, error) {
reader := bytes.NewReader(_TlsTapper46Bytes)
spec, err := ebpf.LoadCollectionSpecFromReader(reader)
if err != nil {
return nil, fmt.Errorf("can't load tlsTapper46: %w", err)
}
return spec, err
}
// loadTlsTapper46Objects loads tlsTapper46 and converts it into a struct.
//
// The following types are suitable as obj argument:
//
// *tlsTapper46Objects
// *tlsTapper46Programs
// *tlsTapper46Maps
//
// See ebpf.CollectionSpec.LoadAndAssign documentation for details.
func loadTlsTapper46Objects(obj interface{}, opts *ebpf.CollectionOptions) error {
spec, err := loadTlsTapper46()
if err != nil {
return err
}
return spec.LoadAndAssign(obj, opts)
}
// tlsTapper46Specs contains maps and programs before they are loaded into the kernel.
//
// It can be passed ebpf.CollectionSpec.Assign.
type tlsTapper46Specs struct {
tlsTapper46ProgramSpecs
tlsTapper46MapSpecs
}
// tlsTapper46Specs contains programs before they are loaded into the kernel.
//
// It can be passed ebpf.CollectionSpec.Assign.
type tlsTapper46ProgramSpecs struct {
GoCryptoTlsAbi0Read *ebpf.ProgramSpec `ebpf:"go_crypto_tls_abi0_read"`
GoCryptoTlsAbi0ReadEx *ebpf.ProgramSpec `ebpf:"go_crypto_tls_abi0_read_ex"`
GoCryptoTlsAbi0Write *ebpf.ProgramSpec `ebpf:"go_crypto_tls_abi0_write"`
GoCryptoTlsAbi0WriteEx *ebpf.ProgramSpec `ebpf:"go_crypto_tls_abi0_write_ex"`
GoCryptoTlsAbiInternalRead *ebpf.ProgramSpec `ebpf:"go_crypto_tls_abi_internal_read"`
GoCryptoTlsAbiInternalReadEx *ebpf.ProgramSpec `ebpf:"go_crypto_tls_abi_internal_read_ex"`
GoCryptoTlsAbiInternalWrite *ebpf.ProgramSpec `ebpf:"go_crypto_tls_abi_internal_write"`
GoCryptoTlsAbiInternalWriteEx *ebpf.ProgramSpec `ebpf:"go_crypto_tls_abi_internal_write_ex"`
SslRead *ebpf.ProgramSpec `ebpf:"ssl_read"`
SslReadEx *ebpf.ProgramSpec `ebpf:"ssl_read_ex"`
SslRetRead *ebpf.ProgramSpec `ebpf:"ssl_ret_read"`
SslRetReadEx *ebpf.ProgramSpec `ebpf:"ssl_ret_read_ex"`
SslRetWrite *ebpf.ProgramSpec `ebpf:"ssl_ret_write"`
SslRetWriteEx *ebpf.ProgramSpec `ebpf:"ssl_ret_write_ex"`
SslWrite *ebpf.ProgramSpec `ebpf:"ssl_write"`
SslWriteEx *ebpf.ProgramSpec `ebpf:"ssl_write_ex"`
SysEnterAccept4 *ebpf.ProgramSpec `ebpf:"sys_enter_accept4"`
SysEnterConnect *ebpf.ProgramSpec `ebpf:"sys_enter_connect"`
SysEnterRead *ebpf.ProgramSpec `ebpf:"sys_enter_read"`
SysEnterWrite *ebpf.ProgramSpec `ebpf:"sys_enter_write"`
SysExitAccept4 *ebpf.ProgramSpec `ebpf:"sys_exit_accept4"`
SysExitConnect *ebpf.ProgramSpec `ebpf:"sys_exit_connect"`
TcpRecvmsg *ebpf.ProgramSpec `ebpf:"tcp_recvmsg"`
TcpSendmsg *ebpf.ProgramSpec `ebpf:"tcp_sendmsg"`
}
// tlsTapper46MapSpecs contains maps before they are loaded into the kernel.
//
// It can be passed ebpf.CollectionSpec.Assign.
type tlsTapper46MapSpecs struct {
AcceptSyscallContext *ebpf.MapSpec `ebpf:"accept_syscall_context"`
ChunksBuffer *ebpf.MapSpec `ebpf:"chunks_buffer"`
ConnectSyscallInfo *ebpf.MapSpec `ebpf:"connect_syscall_info"`
FileDescriptorToIpv4 *ebpf.MapSpec `ebpf:"file_descriptor_to_ipv4"`
GoReadContext *ebpf.MapSpec `ebpf:"go_read_context"`
GoWriteContext *ebpf.MapSpec `ebpf:"go_write_context"`
GoidOffsetsMap *ebpf.MapSpec `ebpf:"goid_offsets_map"`
Heap *ebpf.MapSpec `ebpf:"heap"`
LogBuffer *ebpf.MapSpec `ebpf:"log_buffer"`
OpensslReadContext *ebpf.MapSpec `ebpf:"openssl_read_context"`
OpensslWriteContext *ebpf.MapSpec `ebpf:"openssl_write_context"`
PidsMap *ebpf.MapSpec `ebpf:"pids_map"`
}
// tlsTapper46Objects contains all objects after they have been loaded into the kernel.
//
// It can be passed to loadTlsTapper46Objects or ebpf.CollectionSpec.LoadAndAssign.
type tlsTapper46Objects struct {
tlsTapper46Programs
tlsTapper46Maps
}
func (o *tlsTapper46Objects) Close() error {
return _TlsTapper46Close(
&o.tlsTapper46Programs,
&o.tlsTapper46Maps,
)
}
// tlsTapper46Maps contains all maps after they have been loaded into the kernel.
//
// It can be passed to loadTlsTapper46Objects or ebpf.CollectionSpec.LoadAndAssign.
type tlsTapper46Maps struct {
AcceptSyscallContext *ebpf.Map `ebpf:"accept_syscall_context"`
ChunksBuffer *ebpf.Map `ebpf:"chunks_buffer"`
ConnectSyscallInfo *ebpf.Map `ebpf:"connect_syscall_info"`
FileDescriptorToIpv4 *ebpf.Map `ebpf:"file_descriptor_to_ipv4"`
GoReadContext *ebpf.Map `ebpf:"go_read_context"`
GoWriteContext *ebpf.Map `ebpf:"go_write_context"`
GoidOffsetsMap *ebpf.Map `ebpf:"goid_offsets_map"`
Heap *ebpf.Map `ebpf:"heap"`
LogBuffer *ebpf.Map `ebpf:"log_buffer"`
OpensslReadContext *ebpf.Map `ebpf:"openssl_read_context"`
OpensslWriteContext *ebpf.Map `ebpf:"openssl_write_context"`
PidsMap *ebpf.Map `ebpf:"pids_map"`
}
func (m *tlsTapper46Maps) Close() error {
return _TlsTapper46Close(
m.AcceptSyscallContext,
m.ChunksBuffer,
m.ConnectSyscallInfo,
m.FileDescriptorToIpv4,
m.GoReadContext,
m.GoWriteContext,
m.GoidOffsetsMap,
m.Heap,
m.LogBuffer,
m.OpensslReadContext,
m.OpensslWriteContext,
m.PidsMap,
)
}
// tlsTapper46Programs contains all programs after they have been loaded into the kernel.
//
// It can be passed to loadTlsTapper46Objects or ebpf.CollectionSpec.LoadAndAssign.
type tlsTapper46Programs struct {
GoCryptoTlsAbi0Read *ebpf.Program `ebpf:"go_crypto_tls_abi0_read"`
GoCryptoTlsAbi0ReadEx *ebpf.Program `ebpf:"go_crypto_tls_abi0_read_ex"`
GoCryptoTlsAbi0Write *ebpf.Program `ebpf:"go_crypto_tls_abi0_write"`
GoCryptoTlsAbi0WriteEx *ebpf.Program `ebpf:"go_crypto_tls_abi0_write_ex"`
GoCryptoTlsAbiInternalRead *ebpf.Program `ebpf:"go_crypto_tls_abi_internal_read"`
GoCryptoTlsAbiInternalReadEx *ebpf.Program `ebpf:"go_crypto_tls_abi_internal_read_ex"`
GoCryptoTlsAbiInternalWrite *ebpf.Program `ebpf:"go_crypto_tls_abi_internal_write"`
GoCryptoTlsAbiInternalWriteEx *ebpf.Program `ebpf:"go_crypto_tls_abi_internal_write_ex"`
SslRead *ebpf.Program `ebpf:"ssl_read"`
SslReadEx *ebpf.Program `ebpf:"ssl_read_ex"`
SslRetRead *ebpf.Program `ebpf:"ssl_ret_read"`
SslRetReadEx *ebpf.Program `ebpf:"ssl_ret_read_ex"`
SslRetWrite *ebpf.Program `ebpf:"ssl_ret_write"`
SslRetWriteEx *ebpf.Program `ebpf:"ssl_ret_write_ex"`
SslWrite *ebpf.Program `ebpf:"ssl_write"`
SslWriteEx *ebpf.Program `ebpf:"ssl_write_ex"`
SysEnterAccept4 *ebpf.Program `ebpf:"sys_enter_accept4"`
SysEnterConnect *ebpf.Program `ebpf:"sys_enter_connect"`
SysEnterRead *ebpf.Program `ebpf:"sys_enter_read"`
SysEnterWrite *ebpf.Program `ebpf:"sys_enter_write"`
SysExitAccept4 *ebpf.Program `ebpf:"sys_exit_accept4"`
SysExitConnect *ebpf.Program `ebpf:"sys_exit_connect"`
TcpRecvmsg *ebpf.Program `ebpf:"tcp_recvmsg"`
TcpSendmsg *ebpf.Program `ebpf:"tcp_sendmsg"`
}
func (p *tlsTapper46Programs) Close() error {
return _TlsTapper46Close(
p.GoCryptoTlsAbi0Read,
p.GoCryptoTlsAbi0ReadEx,
p.GoCryptoTlsAbi0Write,
p.GoCryptoTlsAbi0WriteEx,
p.GoCryptoTlsAbiInternalRead,
p.GoCryptoTlsAbiInternalReadEx,
p.GoCryptoTlsAbiInternalWrite,
p.GoCryptoTlsAbiInternalWriteEx,
p.SslRead,
p.SslReadEx,
p.SslRetRead,
p.SslRetReadEx,
p.SslRetWrite,
p.SslRetWriteEx,
p.SslWrite,
p.SslWriteEx,
p.SysEnterAccept4,
p.SysEnterConnect,
p.SysEnterRead,
p.SysEnterWrite,
p.SysExitAccept4,
p.SysExitConnect,
p.TcpRecvmsg,
p.TcpSendmsg,
)
}
func _TlsTapper46Close(closers ...io.Closer) error {
for _, closer := range closers {
if err := closer.Close(); err != nil {
return err
}
}
return nil
}
// Do not access this directly.
//go:embed tlstapper46_bpfel_arm64.o
var _TlsTapper46Bytes []byte


@@ -0,0 +1,244 @@
// Code generated by bpf2go; DO NOT EDIT.
//go:build 386 || amd64
// +build 386 amd64
package tlstapper
import (
"bytes"
_ "embed"
"fmt"
"io"
"github.com/cilium/ebpf"
)
type tlsTapper46GoidOffsets struct {
G_addrOffset uint64
GoidOffset uint64
}
type tlsTapper46TlsChunk struct {
Pid uint32
Tgid uint32
Len uint32
Start uint32
Recorded uint32
Fd uint32
Flags uint32
AddressInfo struct {
Mode int32
Saddr uint32
Daddr uint32
Sport uint16
Dport uint16
}
Data [4096]uint8
}
// loadTlsTapper46 returns the embedded CollectionSpec for tlsTapper46.
func loadTlsTapper46() (*ebpf.CollectionSpec, error) {
reader := bytes.NewReader(_TlsTapper46Bytes)
spec, err := ebpf.LoadCollectionSpecFromReader(reader)
if err != nil {
return nil, fmt.Errorf("can't load tlsTapper46: %w", err)
}
return spec, err
}
// loadTlsTapper46Objects loads tlsTapper46 and converts it into a struct.
//
// The following types are suitable as obj argument:
//
// *tlsTapper46Objects
// *tlsTapper46Programs
// *tlsTapper46Maps
//
// See ebpf.CollectionSpec.LoadAndAssign documentation for details.
func loadTlsTapper46Objects(obj interface{}, opts *ebpf.CollectionOptions) error {
spec, err := loadTlsTapper46()
if err != nil {
return err
}
return spec.LoadAndAssign(obj, opts)
}
// tlsTapper46Specs contains maps and programs before they are loaded into the kernel.
//
// It can be passed ebpf.CollectionSpec.Assign.
type tlsTapper46Specs struct {
tlsTapper46ProgramSpecs
tlsTapper46MapSpecs
}
// tlsTapper46Specs contains programs before they are loaded into the kernel.
//
// It can be passed ebpf.CollectionSpec.Assign.
type tlsTapper46ProgramSpecs struct {
GoCryptoTlsAbi0Read *ebpf.ProgramSpec `ebpf:"go_crypto_tls_abi0_read"`
GoCryptoTlsAbi0ReadEx *ebpf.ProgramSpec `ebpf:"go_crypto_tls_abi0_read_ex"`
GoCryptoTlsAbi0Write *ebpf.ProgramSpec `ebpf:"go_crypto_tls_abi0_write"`
GoCryptoTlsAbi0WriteEx *ebpf.ProgramSpec `ebpf:"go_crypto_tls_abi0_write_ex"`
GoCryptoTlsAbiInternalRead *ebpf.ProgramSpec `ebpf:"go_crypto_tls_abi_internal_read"`
GoCryptoTlsAbiInternalReadEx *ebpf.ProgramSpec `ebpf:"go_crypto_tls_abi_internal_read_ex"`
GoCryptoTlsAbiInternalWrite *ebpf.ProgramSpec `ebpf:"go_crypto_tls_abi_internal_write"`
GoCryptoTlsAbiInternalWriteEx *ebpf.ProgramSpec `ebpf:"go_crypto_tls_abi_internal_write_ex"`
SslRead *ebpf.ProgramSpec `ebpf:"ssl_read"`
SslReadEx *ebpf.ProgramSpec `ebpf:"ssl_read_ex"`
SslRetRead *ebpf.ProgramSpec `ebpf:"ssl_ret_read"`
SslRetReadEx *ebpf.ProgramSpec `ebpf:"ssl_ret_read_ex"`
SslRetWrite *ebpf.ProgramSpec `ebpf:"ssl_ret_write"`
SslRetWriteEx *ebpf.ProgramSpec `ebpf:"ssl_ret_write_ex"`
SslWrite *ebpf.ProgramSpec `ebpf:"ssl_write"`
SslWriteEx *ebpf.ProgramSpec `ebpf:"ssl_write_ex"`
SysEnterAccept4 *ebpf.ProgramSpec `ebpf:"sys_enter_accept4"`
SysEnterConnect *ebpf.ProgramSpec `ebpf:"sys_enter_connect"`
SysEnterRead *ebpf.ProgramSpec `ebpf:"sys_enter_read"`
SysEnterWrite *ebpf.ProgramSpec `ebpf:"sys_enter_write"`
SysExitAccept4 *ebpf.ProgramSpec `ebpf:"sys_exit_accept4"`
SysExitConnect *ebpf.ProgramSpec `ebpf:"sys_exit_connect"`
TcpRecvmsg *ebpf.ProgramSpec `ebpf:"tcp_recvmsg"`
TcpSendmsg *ebpf.ProgramSpec `ebpf:"tcp_sendmsg"`
}
// tlsTapper46MapSpecs contains maps before they are loaded into the kernel.
//
// It can be passed ebpf.CollectionSpec.Assign.
type tlsTapper46MapSpecs struct {
AcceptSyscallContext *ebpf.MapSpec `ebpf:"accept_syscall_context"`
ChunksBuffer *ebpf.MapSpec `ebpf:"chunks_buffer"`
ConnectSyscallInfo *ebpf.MapSpec `ebpf:"connect_syscall_info"`
FileDescriptorToIpv4 *ebpf.MapSpec `ebpf:"file_descriptor_to_ipv4"`
GoReadContext *ebpf.MapSpec `ebpf:"go_read_context"`
GoWriteContext *ebpf.MapSpec `ebpf:"go_write_context"`
GoidOffsetsMap *ebpf.MapSpec `ebpf:"goid_offsets_map"`
Heap *ebpf.MapSpec `ebpf:"heap"`
LogBuffer *ebpf.MapSpec `ebpf:"log_buffer"`
OpensslReadContext *ebpf.MapSpec `ebpf:"openssl_read_context"`
OpensslWriteContext *ebpf.MapSpec `ebpf:"openssl_write_context"`
PidsMap *ebpf.MapSpec `ebpf:"pids_map"`
}
// tlsTapper46Objects contains all objects after they have been loaded into the kernel.
//
// It can be passed to loadTlsTapper46Objects or ebpf.CollectionSpec.LoadAndAssign.
type tlsTapper46Objects struct {
tlsTapper46Programs
tlsTapper46Maps
}
func (o *tlsTapper46Objects) Close() error {
return _TlsTapper46Close(
&o.tlsTapper46Programs,
&o.tlsTapper46Maps,
)
}
// tlsTapper46Maps contains all maps after they have been loaded into the kernel.
//
// It can be passed to loadTlsTapper46Objects or ebpf.CollectionSpec.LoadAndAssign.
type tlsTapper46Maps struct {
AcceptSyscallContext *ebpf.Map `ebpf:"accept_syscall_context"`
ChunksBuffer *ebpf.Map `ebpf:"chunks_buffer"`
ConnectSyscallInfo *ebpf.Map `ebpf:"connect_syscall_info"`
FileDescriptorToIpv4 *ebpf.Map `ebpf:"file_descriptor_to_ipv4"`
GoReadContext *ebpf.Map `ebpf:"go_read_context"`
GoWriteContext *ebpf.Map `ebpf:"go_write_context"`
GoidOffsetsMap *ebpf.Map `ebpf:"goid_offsets_map"`
Heap *ebpf.Map `ebpf:"heap"`
LogBuffer *ebpf.Map `ebpf:"log_buffer"`
OpensslReadContext *ebpf.Map `ebpf:"openssl_read_context"`
OpensslWriteContext *ebpf.Map `ebpf:"openssl_write_context"`
PidsMap *ebpf.Map `ebpf:"pids_map"`
}
func (m *tlsTapper46Maps) Close() error {
return _TlsTapper46Close(
m.AcceptSyscallContext,
m.ChunksBuffer,
m.ConnectSyscallInfo,
m.FileDescriptorToIpv4,
m.GoReadContext,
m.GoWriteContext,
m.GoidOffsetsMap,
m.Heap,
m.LogBuffer,
m.OpensslReadContext,
m.OpensslWriteContext,
m.PidsMap,
)
}
// tlsTapper46Programs contains all programs after they have been loaded into the kernel.
//
// It can be passed to loadTlsTapper46Objects or ebpf.CollectionSpec.LoadAndAssign.
type tlsTapper46Programs struct {
GoCryptoTlsAbi0Read *ebpf.Program `ebpf:"go_crypto_tls_abi0_read"`
GoCryptoTlsAbi0ReadEx *ebpf.Program `ebpf:"go_crypto_tls_abi0_read_ex"`
GoCryptoTlsAbi0Write *ebpf.Program `ebpf:"go_crypto_tls_abi0_write"`
GoCryptoTlsAbi0WriteEx *ebpf.Program `ebpf:"go_crypto_tls_abi0_write_ex"`
GoCryptoTlsAbiInternalRead *ebpf.Program `ebpf:"go_crypto_tls_abi_internal_read"`
GoCryptoTlsAbiInternalReadEx *ebpf.Program `ebpf:"go_crypto_tls_abi_internal_read_ex"`
GoCryptoTlsAbiInternalWrite *ebpf.Program `ebpf:"go_crypto_tls_abi_internal_write"`
GoCryptoTlsAbiInternalWriteEx *ebpf.Program `ebpf:"go_crypto_tls_abi_internal_write_ex"`
SslRead *ebpf.Program `ebpf:"ssl_read"`
SslReadEx *ebpf.Program `ebpf:"ssl_read_ex"`
SslRetRead *ebpf.Program `ebpf:"ssl_ret_read"`
SslRetReadEx *ebpf.Program `ebpf:"ssl_ret_read_ex"`
SslRetWrite *ebpf.Program `ebpf:"ssl_ret_write"`
SslRetWriteEx *ebpf.Program `ebpf:"ssl_ret_write_ex"`
SslWrite *ebpf.Program `ebpf:"ssl_write"`
SslWriteEx *ebpf.Program `ebpf:"ssl_write_ex"`
SysEnterAccept4 *ebpf.Program `ebpf:"sys_enter_accept4"`
SysEnterConnect *ebpf.Program `ebpf:"sys_enter_connect"`
SysEnterRead *ebpf.Program `ebpf:"sys_enter_read"`
SysEnterWrite *ebpf.Program `ebpf:"sys_enter_write"`
SysExitAccept4 *ebpf.Program `ebpf:"sys_exit_accept4"`
SysExitConnect *ebpf.Program `ebpf:"sys_exit_connect"`
TcpRecvmsg *ebpf.Program `ebpf:"tcp_recvmsg"`
TcpSendmsg *ebpf.Program `ebpf:"tcp_sendmsg"`
}
func (p *tlsTapper46Programs) Close() error {
return _TlsTapper46Close(
p.GoCryptoTlsAbi0Read,
p.GoCryptoTlsAbi0ReadEx,
p.GoCryptoTlsAbi0Write,
p.GoCryptoTlsAbi0WriteEx,
p.GoCryptoTlsAbiInternalRead,
p.GoCryptoTlsAbiInternalReadEx,
p.GoCryptoTlsAbiInternalWrite,
p.GoCryptoTlsAbiInternalWriteEx,
p.SslRead,
p.SslReadEx,
p.SslRetRead,
p.SslRetReadEx,
p.SslRetWrite,
p.SslRetWriteEx,
p.SslWrite,
p.SslWriteEx,
p.SysEnterAccept4,
p.SysEnterConnect,
p.SysEnterRead,
p.SysEnterWrite,
p.SysExitAccept4,
p.SysExitConnect,
p.TcpRecvmsg,
p.TcpSendmsg,
)
}
func _TlsTapper46Close(closers ...io.Closer) error {
for _, closer := range closers {
if err := closer.Close(); err != nil {
return err
}
}
return nil
}
// Do not access this directly.
//go:embed tlstapper46_bpfel_x86.o
var _TlsTapper46Bytes []byte


@@ -13,16 +13,27 @@ import (
"github.com/cilium/ebpf"
)
type tlsTapperGoidOffsets struct {
G_addrOffset uint64
GoidOffset uint64
}
type tlsTapperTlsChunk struct {
Pid uint32
Tgid uint32
Len uint32
Start uint32
Recorded uint32
Fd uint32
Flags uint32
Address [16]uint8
Data [4096]uint8
Pid uint32
Tgid uint32
Len uint32
Start uint32
Recorded uint32
Fd uint32
Flags uint32
AddressInfo struct {
Mode int32
Saddr uint32
Daddr uint32
Sport uint16
Dport uint16
}
Data [4096]uint8
}
// loadTlsTapper returns the embedded CollectionSpec for tlsTapper.
@@ -66,24 +77,30 @@ type tlsTapperSpecs struct {
//
// It can be passed ebpf.CollectionSpec.Assign.
type tlsTapperProgramSpecs struct {
GoCryptoTlsRead *ebpf.ProgramSpec `ebpf:"go_crypto_tls_read"`
GoCryptoTlsReadEx *ebpf.ProgramSpec `ebpf:"go_crypto_tls_read_ex"`
GoCryptoTlsWrite *ebpf.ProgramSpec `ebpf:"go_crypto_tls_write"`
GoCryptoTlsWriteEx *ebpf.ProgramSpec `ebpf:"go_crypto_tls_write_ex"`
SslRead *ebpf.ProgramSpec `ebpf:"ssl_read"`
SslReadEx *ebpf.ProgramSpec `ebpf:"ssl_read_ex"`
SslRetRead *ebpf.ProgramSpec `ebpf:"ssl_ret_read"`
SslRetReadEx *ebpf.ProgramSpec `ebpf:"ssl_ret_read_ex"`
SslRetWrite *ebpf.ProgramSpec `ebpf:"ssl_ret_write"`
SslRetWriteEx *ebpf.ProgramSpec `ebpf:"ssl_ret_write_ex"`
SslWrite *ebpf.ProgramSpec `ebpf:"ssl_write"`
SslWriteEx *ebpf.ProgramSpec `ebpf:"ssl_write_ex"`
SysEnterAccept4 *ebpf.ProgramSpec `ebpf:"sys_enter_accept4"`
SysEnterConnect *ebpf.ProgramSpec `ebpf:"sys_enter_connect"`
SysEnterRead *ebpf.ProgramSpec `ebpf:"sys_enter_read"`
SysEnterWrite *ebpf.ProgramSpec `ebpf:"sys_enter_write"`
SysExitAccept4 *ebpf.ProgramSpec `ebpf:"sys_exit_accept4"`
SysExitConnect *ebpf.ProgramSpec `ebpf:"sys_exit_connect"`
GoCryptoTlsAbi0Read *ebpf.ProgramSpec `ebpf:"go_crypto_tls_abi0_read"`
GoCryptoTlsAbi0ReadEx *ebpf.ProgramSpec `ebpf:"go_crypto_tls_abi0_read_ex"`
GoCryptoTlsAbi0Write *ebpf.ProgramSpec `ebpf:"go_crypto_tls_abi0_write"`
GoCryptoTlsAbi0WriteEx *ebpf.ProgramSpec `ebpf:"go_crypto_tls_abi0_write_ex"`
GoCryptoTlsAbiInternalRead *ebpf.ProgramSpec `ebpf:"go_crypto_tls_abi_internal_read"`
GoCryptoTlsAbiInternalReadEx *ebpf.ProgramSpec `ebpf:"go_crypto_tls_abi_internal_read_ex"`
GoCryptoTlsAbiInternalWrite *ebpf.ProgramSpec `ebpf:"go_crypto_tls_abi_internal_write"`
GoCryptoTlsAbiInternalWriteEx *ebpf.ProgramSpec `ebpf:"go_crypto_tls_abi_internal_write_ex"`
SslRead *ebpf.ProgramSpec `ebpf:"ssl_read"`
SslReadEx *ebpf.ProgramSpec `ebpf:"ssl_read_ex"`
SslRetRead *ebpf.ProgramSpec `ebpf:"ssl_ret_read"`
SslRetReadEx *ebpf.ProgramSpec `ebpf:"ssl_ret_read_ex"`
SslRetWrite *ebpf.ProgramSpec `ebpf:"ssl_ret_write"`
SslRetWriteEx *ebpf.ProgramSpec `ebpf:"ssl_ret_write_ex"`
SslWrite *ebpf.ProgramSpec `ebpf:"ssl_write"`
SslWriteEx *ebpf.ProgramSpec `ebpf:"ssl_write_ex"`
SysEnterAccept4 *ebpf.ProgramSpec `ebpf:"sys_enter_accept4"`
SysEnterConnect *ebpf.ProgramSpec `ebpf:"sys_enter_connect"`
SysEnterRead *ebpf.ProgramSpec `ebpf:"sys_enter_read"`
SysEnterWrite *ebpf.ProgramSpec `ebpf:"sys_enter_write"`
SysExitAccept4 *ebpf.ProgramSpec `ebpf:"sys_exit_accept4"`
SysExitConnect *ebpf.ProgramSpec `ebpf:"sys_exit_connect"`
TcpRecvmsg *ebpf.ProgramSpec `ebpf:"tcp_recvmsg"`
TcpSendmsg *ebpf.ProgramSpec `ebpf:"tcp_sendmsg"`
}
// tlsTapperMapSpecs contains maps before they are loaded into the kernel.
@@ -96,6 +113,7 @@ type tlsTapperMapSpecs struct {
FileDescriptorToIpv4 *ebpf.MapSpec `ebpf:"file_descriptor_to_ipv4"`
GoReadContext *ebpf.MapSpec `ebpf:"go_read_context"`
GoWriteContext *ebpf.MapSpec `ebpf:"go_write_context"`
GoidOffsetsMap *ebpf.MapSpec `ebpf:"goid_offsets_map"`
Heap *ebpf.MapSpec `ebpf:"heap"`
LogBuffer *ebpf.MapSpec `ebpf:"log_buffer"`
OpensslReadContext *ebpf.MapSpec `ebpf:"openssl_read_context"`
@@ -128,6 +146,7 @@ type tlsTapperMaps struct {
FileDescriptorToIpv4 *ebpf.Map `ebpf:"file_descriptor_to_ipv4"`
GoReadContext *ebpf.Map `ebpf:"go_read_context"`
GoWriteContext *ebpf.Map `ebpf:"go_write_context"`
GoidOffsetsMap *ebpf.Map `ebpf:"goid_offsets_map"`
Heap *ebpf.Map `ebpf:"heap"`
LogBuffer *ebpf.Map `ebpf:"log_buffer"`
OpensslReadContext *ebpf.Map `ebpf:"openssl_read_context"`
@@ -143,6 +162,7 @@ func (m *tlsTapperMaps) Close() error {
m.FileDescriptorToIpv4,
m.GoReadContext,
m.GoWriteContext,
m.GoidOffsetsMap,
m.Heap,
m.LogBuffer,
m.OpensslReadContext,
@@ -155,32 +175,42 @@ func (m *tlsTapperMaps) Close() error {
//
// It can be passed to loadTlsTapperObjects or ebpf.CollectionSpec.LoadAndAssign.
type tlsTapperPrograms struct {
GoCryptoTlsRead *ebpf.Program `ebpf:"go_crypto_tls_read"`
GoCryptoTlsReadEx *ebpf.Program `ebpf:"go_crypto_tls_read_ex"`
GoCryptoTlsWrite *ebpf.Program `ebpf:"go_crypto_tls_write"`
GoCryptoTlsWriteEx *ebpf.Program `ebpf:"go_crypto_tls_write_ex"`
SslRead *ebpf.Program `ebpf:"ssl_read"`
SslReadEx *ebpf.Program `ebpf:"ssl_read_ex"`
SslRetRead *ebpf.Program `ebpf:"ssl_ret_read"`
SslRetReadEx *ebpf.Program `ebpf:"ssl_ret_read_ex"`
SslRetWrite *ebpf.Program `ebpf:"ssl_ret_write"`
SslRetWriteEx *ebpf.Program `ebpf:"ssl_ret_write_ex"`
SslWrite *ebpf.Program `ebpf:"ssl_write"`
SslWriteEx *ebpf.Program `ebpf:"ssl_write_ex"`
SysEnterAccept4 *ebpf.Program `ebpf:"sys_enter_accept4"`
SysEnterConnect *ebpf.Program `ebpf:"sys_enter_connect"`
SysEnterRead *ebpf.Program `ebpf:"sys_enter_read"`
SysEnterWrite *ebpf.Program `ebpf:"sys_enter_write"`
SysExitAccept4 *ebpf.Program `ebpf:"sys_exit_accept4"`
SysExitConnect *ebpf.Program `ebpf:"sys_exit_connect"`
GoCryptoTlsAbi0Read *ebpf.Program `ebpf:"go_crypto_tls_abi0_read"`
GoCryptoTlsAbi0ReadEx *ebpf.Program `ebpf:"go_crypto_tls_abi0_read_ex"`
GoCryptoTlsAbi0Write *ebpf.Program `ebpf:"go_crypto_tls_abi0_write"`
GoCryptoTlsAbi0WriteEx *ebpf.Program `ebpf:"go_crypto_tls_abi0_write_ex"`
GoCryptoTlsAbiInternalRead *ebpf.Program `ebpf:"go_crypto_tls_abi_internal_read"`
GoCryptoTlsAbiInternalReadEx *ebpf.Program `ebpf:"go_crypto_tls_abi_internal_read_ex"`
GoCryptoTlsAbiInternalWrite *ebpf.Program `ebpf:"go_crypto_tls_abi_internal_write"`
GoCryptoTlsAbiInternalWriteEx *ebpf.Program `ebpf:"go_crypto_tls_abi_internal_write_ex"`
SslRead *ebpf.Program `ebpf:"ssl_read"`
SslReadEx *ebpf.Program `ebpf:"ssl_read_ex"`
SslRetRead *ebpf.Program `ebpf:"ssl_ret_read"`
SslRetReadEx *ebpf.Program `ebpf:"ssl_ret_read_ex"`
SslRetWrite *ebpf.Program `ebpf:"ssl_ret_write"`
SslRetWriteEx *ebpf.Program `ebpf:"ssl_ret_write_ex"`
SslWrite *ebpf.Program `ebpf:"ssl_write"`
SslWriteEx *ebpf.Program `ebpf:"ssl_write_ex"`
SysEnterAccept4 *ebpf.Program `ebpf:"sys_enter_accept4"`
SysEnterConnect *ebpf.Program `ebpf:"sys_enter_connect"`
SysEnterRead *ebpf.Program `ebpf:"sys_enter_read"`
SysEnterWrite *ebpf.Program `ebpf:"sys_enter_write"`
SysExitAccept4 *ebpf.Program `ebpf:"sys_exit_accept4"`
SysExitConnect *ebpf.Program `ebpf:"sys_exit_connect"`
TcpRecvmsg *ebpf.Program `ebpf:"tcp_recvmsg"`
TcpSendmsg *ebpf.Program `ebpf:"tcp_sendmsg"`
}
func (p *tlsTapperPrograms) Close() error {
return _TlsTapperClose(
p.GoCryptoTlsRead,
p.GoCryptoTlsReadEx,
p.GoCryptoTlsWrite,
p.GoCryptoTlsWriteEx,
p.GoCryptoTlsAbi0Read,
p.GoCryptoTlsAbi0ReadEx,
p.GoCryptoTlsAbi0Write,
p.GoCryptoTlsAbi0WriteEx,
p.GoCryptoTlsAbiInternalRead,
p.GoCryptoTlsAbiInternalReadEx,
p.GoCryptoTlsAbiInternalWrite,
p.GoCryptoTlsAbiInternalWriteEx,
p.SslRead,
p.SslReadEx,
p.SslRetRead,
@@ -195,6 +225,8 @@ func (p *tlsTapperPrograms) Close() error {
p.SysEnterWrite,
p.SysExitAccept4,
p.SysExitConnect,
p.TcpRecvmsg,
p.TcpSendmsg,
)
}


@@ -13,16 +13,27 @@ import (
"github.com/cilium/ebpf"
)
type tlsTapperGoidOffsets struct {
G_addrOffset uint64
GoidOffset uint64
}
type tlsTapperTlsChunk struct {
Pid uint32
Tgid uint32
Len uint32
Start uint32
Recorded uint32
Fd uint32
Flags uint32
Address [16]uint8
Data [4096]uint8
Pid uint32
Tgid uint32
Len uint32
Start uint32
Recorded uint32
Fd uint32
Flags uint32
AddressInfo struct {
Mode int32
Saddr uint32
Daddr uint32
Sport uint16
Dport uint16
}
Data [4096]uint8
}
// loadTlsTapper returns the embedded CollectionSpec for tlsTapper.
@@ -66,24 +77,30 @@ type tlsTapperSpecs struct {
//
// It can be passed ebpf.CollectionSpec.Assign.
type tlsTapperProgramSpecs struct {
GoCryptoTlsRead *ebpf.ProgramSpec `ebpf:"go_crypto_tls_read"`
GoCryptoTlsReadEx *ebpf.ProgramSpec `ebpf:"go_crypto_tls_read_ex"`
GoCryptoTlsWrite *ebpf.ProgramSpec `ebpf:"go_crypto_tls_write"`
GoCryptoTlsWriteEx *ebpf.ProgramSpec `ebpf:"go_crypto_tls_write_ex"`
SslRead *ebpf.ProgramSpec `ebpf:"ssl_read"`
SslReadEx *ebpf.ProgramSpec `ebpf:"ssl_read_ex"`
SslRetRead *ebpf.ProgramSpec `ebpf:"ssl_ret_read"`
SslRetReadEx *ebpf.ProgramSpec `ebpf:"ssl_ret_read_ex"`
SslRetWrite *ebpf.ProgramSpec `ebpf:"ssl_ret_write"`
SslRetWriteEx *ebpf.ProgramSpec `ebpf:"ssl_ret_write_ex"`
SslWrite *ebpf.ProgramSpec `ebpf:"ssl_write"`
SslWriteEx *ebpf.ProgramSpec `ebpf:"ssl_write_ex"`
SysEnterAccept4 *ebpf.ProgramSpec `ebpf:"sys_enter_accept4"`
SysEnterConnect *ebpf.ProgramSpec `ebpf:"sys_enter_connect"`
SysEnterRead *ebpf.ProgramSpec `ebpf:"sys_enter_read"`
SysEnterWrite *ebpf.ProgramSpec `ebpf:"sys_enter_write"`
SysExitAccept4 *ebpf.ProgramSpec `ebpf:"sys_exit_accept4"`
SysExitConnect *ebpf.ProgramSpec `ebpf:"sys_exit_connect"`
GoCryptoTlsAbi0Read *ebpf.ProgramSpec `ebpf:"go_crypto_tls_abi0_read"`
GoCryptoTlsAbi0ReadEx *ebpf.ProgramSpec `ebpf:"go_crypto_tls_abi0_read_ex"`
GoCryptoTlsAbi0Write *ebpf.ProgramSpec `ebpf:"go_crypto_tls_abi0_write"`
GoCryptoTlsAbi0WriteEx *ebpf.ProgramSpec `ebpf:"go_crypto_tls_abi0_write_ex"`
GoCryptoTlsAbiInternalRead *ebpf.ProgramSpec `ebpf:"go_crypto_tls_abi_internal_read"`
GoCryptoTlsAbiInternalReadEx *ebpf.ProgramSpec `ebpf:"go_crypto_tls_abi_internal_read_ex"`
GoCryptoTlsAbiInternalWrite *ebpf.ProgramSpec `ebpf:"go_crypto_tls_abi_internal_write"`
GoCryptoTlsAbiInternalWriteEx *ebpf.ProgramSpec `ebpf:"go_crypto_tls_abi_internal_write_ex"`
SslRead *ebpf.ProgramSpec `ebpf:"ssl_read"`
SslReadEx *ebpf.ProgramSpec `ebpf:"ssl_read_ex"`
SslRetRead *ebpf.ProgramSpec `ebpf:"ssl_ret_read"`
SslRetReadEx *ebpf.ProgramSpec `ebpf:"ssl_ret_read_ex"`
SslRetWrite *ebpf.ProgramSpec `ebpf:"ssl_ret_write"`
SslRetWriteEx *ebpf.ProgramSpec `ebpf:"ssl_ret_write_ex"`
SslWrite *ebpf.ProgramSpec `ebpf:"ssl_write"`
SslWriteEx *ebpf.ProgramSpec `ebpf:"ssl_write_ex"`
SysEnterAccept4 *ebpf.ProgramSpec `ebpf:"sys_enter_accept4"`
SysEnterConnect *ebpf.ProgramSpec `ebpf:"sys_enter_connect"`
SysEnterRead *ebpf.ProgramSpec `ebpf:"sys_enter_read"`
SysEnterWrite *ebpf.ProgramSpec `ebpf:"sys_enter_write"`
SysExitAccept4 *ebpf.ProgramSpec `ebpf:"sys_exit_accept4"`
SysExitConnect *ebpf.ProgramSpec `ebpf:"sys_exit_connect"`
TcpRecvmsg *ebpf.ProgramSpec `ebpf:"tcp_recvmsg"`
TcpSendmsg *ebpf.ProgramSpec `ebpf:"tcp_sendmsg"`
}
// tlsTapperMapSpecs contains maps before they are loaded into the kernel.
@@ -96,6 +113,7 @@ type tlsTapperMapSpecs struct {
FileDescriptorToIpv4 *ebpf.MapSpec `ebpf:"file_descriptor_to_ipv4"`
GoReadContext *ebpf.MapSpec `ebpf:"go_read_context"`
GoWriteContext *ebpf.MapSpec `ebpf:"go_write_context"`
GoidOffsetsMap *ebpf.MapSpec `ebpf:"goid_offsets_map"`
Heap *ebpf.MapSpec `ebpf:"heap"`
LogBuffer *ebpf.MapSpec `ebpf:"log_buffer"`
OpensslReadContext *ebpf.MapSpec `ebpf:"openssl_read_context"`
@@ -128,6 +146,7 @@ type tlsTapperMaps struct {
FileDescriptorToIpv4 *ebpf.Map `ebpf:"file_descriptor_to_ipv4"`
GoReadContext *ebpf.Map `ebpf:"go_read_context"`
GoWriteContext *ebpf.Map `ebpf:"go_write_context"`
GoidOffsetsMap *ebpf.Map `ebpf:"goid_offsets_map"`
Heap *ebpf.Map `ebpf:"heap"`
LogBuffer *ebpf.Map `ebpf:"log_buffer"`
OpensslReadContext *ebpf.Map `ebpf:"openssl_read_context"`
@@ -143,6 +162,7 @@ func (m *tlsTapperMaps) Close() error {
m.FileDescriptorToIpv4,
m.GoReadContext,
m.GoWriteContext,
m.GoidOffsetsMap,
m.Heap,
m.LogBuffer,
m.OpensslReadContext,
@@ -155,32 +175,42 @@ func (m *tlsTapperMaps) Close() error {
//
// It can be passed to loadTlsTapperObjects or ebpf.CollectionSpec.LoadAndAssign.
type tlsTapperPrograms struct {
GoCryptoTlsRead *ebpf.Program `ebpf:"go_crypto_tls_read"`
GoCryptoTlsReadEx *ebpf.Program `ebpf:"go_crypto_tls_read_ex"`
GoCryptoTlsWrite *ebpf.Program `ebpf:"go_crypto_tls_write"`
GoCryptoTlsWriteEx *ebpf.Program `ebpf:"go_crypto_tls_write_ex"`
SslRead *ebpf.Program `ebpf:"ssl_read"`
SslReadEx *ebpf.Program `ebpf:"ssl_read_ex"`
SslRetRead *ebpf.Program `ebpf:"ssl_ret_read"`
SslRetReadEx *ebpf.Program `ebpf:"ssl_ret_read_ex"`
SslRetWrite *ebpf.Program `ebpf:"ssl_ret_write"`
SslRetWriteEx *ebpf.Program `ebpf:"ssl_ret_write_ex"`
SslWrite *ebpf.Program `ebpf:"ssl_write"`
SslWriteEx *ebpf.Program `ebpf:"ssl_write_ex"`
SysEnterAccept4 *ebpf.Program `ebpf:"sys_enter_accept4"`
SysEnterConnect *ebpf.Program `ebpf:"sys_enter_connect"`
SysEnterRead *ebpf.Program `ebpf:"sys_enter_read"`
SysEnterWrite *ebpf.Program `ebpf:"sys_enter_write"`
SysExitAccept4 *ebpf.Program `ebpf:"sys_exit_accept4"`
SysExitConnect *ebpf.Program `ebpf:"sys_exit_connect"`
GoCryptoTlsAbi0Read *ebpf.Program `ebpf:"go_crypto_tls_abi0_read"`
GoCryptoTlsAbi0ReadEx *ebpf.Program `ebpf:"go_crypto_tls_abi0_read_ex"`
GoCryptoTlsAbi0Write *ebpf.Program `ebpf:"go_crypto_tls_abi0_write"`
GoCryptoTlsAbi0WriteEx *ebpf.Program `ebpf:"go_crypto_tls_abi0_write_ex"`
GoCryptoTlsAbiInternalRead *ebpf.Program `ebpf:"go_crypto_tls_abi_internal_read"`
GoCryptoTlsAbiInternalReadEx *ebpf.Program `ebpf:"go_crypto_tls_abi_internal_read_ex"`
GoCryptoTlsAbiInternalWrite *ebpf.Program `ebpf:"go_crypto_tls_abi_internal_write"`
GoCryptoTlsAbiInternalWriteEx *ebpf.Program `ebpf:"go_crypto_tls_abi_internal_write_ex"`
SslRead *ebpf.Program `ebpf:"ssl_read"`
SslReadEx *ebpf.Program `ebpf:"ssl_read_ex"`
SslRetRead *ebpf.Program `ebpf:"ssl_ret_read"`
SslRetReadEx *ebpf.Program `ebpf:"ssl_ret_read_ex"`
SslRetWrite *ebpf.Program `ebpf:"ssl_ret_write"`
SslRetWriteEx *ebpf.Program `ebpf:"ssl_ret_write_ex"`
SslWrite *ebpf.Program `ebpf:"ssl_write"`
SslWriteEx *ebpf.Program `ebpf:"ssl_write_ex"`
SysEnterAccept4 *ebpf.Program `ebpf:"sys_enter_accept4"`
SysEnterConnect *ebpf.Program `ebpf:"sys_enter_connect"`
SysEnterRead *ebpf.Program `ebpf:"sys_enter_read"`
SysEnterWrite *ebpf.Program `ebpf:"sys_enter_write"`
SysExitAccept4 *ebpf.Program `ebpf:"sys_exit_accept4"`
SysExitConnect *ebpf.Program `ebpf:"sys_exit_connect"`
TcpRecvmsg *ebpf.Program `ebpf:"tcp_recvmsg"`
TcpSendmsg *ebpf.Program `ebpf:"tcp_sendmsg"`
}
func (p *tlsTapperPrograms) Close() error {
return _TlsTapperClose(
p.GoCryptoTlsRead,
p.GoCryptoTlsReadEx,
p.GoCryptoTlsWrite,
p.GoCryptoTlsWriteEx,
p.GoCryptoTlsAbi0Read,
p.GoCryptoTlsAbi0ReadEx,
p.GoCryptoTlsAbi0Write,
p.GoCryptoTlsAbi0WriteEx,
p.GoCryptoTlsAbiInternalRead,
p.GoCryptoTlsAbiInternalReadEx,
p.GoCryptoTlsAbiInternalWrite,
p.GoCryptoTlsAbiInternalWriteEx,
p.SslRead,
p.SslReadEx,
p.SslRetRead,
@@ -195,6 +225,8 @@ func (p *tlsTapperPrograms) Close() error {
p.SysEnterWrite,
p.SysExitAccept4,
p.SysExitConnect,
p.TcpRecvmsg,
p.TcpSendmsg,
)
}

Binary file not shown.

File diff suppressed because it is too large


@@ -26,14 +26,15 @@
"@craco/craco": "^6.4.3",
"@types/jest": "^26.0.24",
"@types/node": "^12.20.54",
"sass": "^1.52.3",
"react": "^17.0.2",
"react-copy-to-clipboard": "^5.1.0",
"react-dom": "^17.0.2",
"recoil": "^0.7.2"
"recoil": "^0.7.2",
"sass": "^1.52.3"
},
"dependencies": {
"@craco/craco": "^6.4.3",
"@elastic/eui": "^60.2.0",
"@emotion/react": "^11.9.0",
"@emotion/styled": "^11.8.1",
"@mui/icons-material": "^5.8.2",
@@ -42,6 +43,7 @@
"@mui/styles": "^5.8.0",
"@types/lodash": "^4.14.182",
"@uiw/react-textarea-code-editor": "^1.6.0",
"ace-builds": "^1.6.0",
"axios": "^0.27.2",
"core-js": "^3.22.7",
"highlight.js": "^11.5.1",
@@ -54,6 +56,7 @@
"node-fetch": "^3.2.4",
"numeral": "^2.0.6",
"protobuf-decoder": "^0.1.2",
"react-ace": "^9.0.0",
"react-graph-vis": "^1.0.7",
"react-lowlight": "^3.0.0",
"react-router-dom": "^6.3.0",
@@ -63,12 +66,14 @@
"recharts": "^2.1.10",
"redoc": "^2.0.0-rc.71",
"styled-components": "^5.3.5",
"use-file-picker": "^1.4.2",
"web-vitals": "^2.1.4",
"xml-formatter": "^2.6.1"
},
"devDependencies": {
"@rollup/plugin-node-resolve": "^13.3.0",
"@svgr/rollup": "^6.2.1",
"@types/ace": "^0.0.48",
"cross-env": "^7.0.3",
"env-cmd": "^10.1.0",
"gh-pages": "^4.0.0",
@@ -81,7 +86,7 @@
"rollup-plugin-postcss": "^4.0.2",
"rollup-plugin-sass": "^1.2.12",
"rollup-plugin-scss": "^3.0.0",
"typescript": "^4.7.2"
"typescript": "^4.5.3"
},
"eslintConfig": {
"extends": [
@@ -90,6 +95,7 @@
]
},
"files": [
"src/*.scss",
"dist"
]
}

Some files were not shown because too many files have changed in this diff.