Compare commits

...

40 Commits

Author SHA1 Message Date
Nimrod Gilboa Markevich
e09748baff Fix spelling in docs (#745)
Co-authored-by: gadotroee <55343099+gadotroee@users.noreply.github.com>
2022-02-01 17:59:43 +02:00
Adam Kol
20fcc8e163 Cypress: prettier code (#743)
Co-authored-by: gadotroee <55343099+gadotroee@users.noreply.github.com>
2022-02-01 17:46:07 +02:00
Igor Gov
fd64c1bb14 Refactor mizuserver module rename (#742)
Co-authored-by: Igor Gov <igor.govorov1@gmail.com>
2022-02-01 17:19:24 +02:00
Adam Kol
3a51ca21eb cypress body test fix (#741) 2022-02-01 15:12:01 +02:00
Igor Gov
0a2e55f7bc Fix lint errors file not found (#740)
Co-authored-by: Igor Gov <igor.govorov1@gmail.com>
2022-02-01 14:17:56 +02:00
Adam Kol
f7c200b821 Cypress: first step of the sanity test (#727)
Co-authored-by: gadotroee <55343099+gadotroee@users.noreply.github.com>
Co-authored-by: Roee Gadot <roee.gadot@up9.com>
2022-02-01 12:43:45 +02:00
M. Mert Yıldıran
0846a98bc1 Fix the Redis dissector (#739)
Co-authored-by: Igor Gov <iggvrv@gmail.com>
2022-02-01 13:24:56 +03:00
Igor Gov
602225bb36 Adding go lint to more modules (#738) 2022-02-01 12:08:55 +02:00
Adam Kol
c0f6f2a049 Cypress: Right side body test (#722) 2022-02-01 11:12:41 +02:00
Igor Gov
7846d812c1 Fix: minor error handling bug (#737)
Co-authored-by: Igor Gov <igor.govorov1@gmail.com>
2022-02-01 09:21:17 +02:00
Igor Gov
0f6c56986f Technical depth: Adding Go linter to CI (#734) 2022-02-01 08:47:26 +02:00
Alex Haiut
daf6b3db06 unhide svcmap + CHANGELOG (#724) 2022-01-31 17:20:36 +02:00
leon-up9
fdf552a9ec TRA-4169 apply query by enter (#733) 2022-01-31 17:08:03 +02:00
M. Mert Yıldıran
cbff1837c1 Fix protobuf prettify logic (#732) 2022-01-31 16:58:38 +02:00
AmitUp9
71425928cc toast removed (#735) 2022-01-31 16:48:40 +02:00
Igor Gov
82db4acb7d Build agent docker image during CI (#725) 2022-01-31 14:54:35 +02:00
M. Mert Yıldıran
9bd82191aa Fix the source-destination flip in Kafka (#729) 2022-01-31 14:46:22 +03:00
Igor Gov
83b657472b Revert "Fix: show agent-image in config after generation (#717)" (#728) 2022-01-31 13:20:36 +02:00
lirazyehezkel
2e5cf13b3f Mizu start dev commands (#726)
* start dev commands

* env files

* no message
2022-01-31 12:05:50 +02:00
gadotroee
4be7164f20 Fix routing in frontend not working (#723) 2022-01-31 10:49:44 +02:00
lirazyehezkel
0abd7c06ff Mizu fetch url (#721)
* Fix mizu fetch url

* fix env files

Co-authored-by: RoyUP9 <87927115+RoyUP9@users.noreply.github.com>
2022-01-30 17:05:09 +02:00
Nimrod Gilboa Markevich
c2739a68c2 Don't stop dissecting long lasting HTTP connections after 10 seconds timeout (#720)
* Set HTTP as protocol after parsing of first message

* Remove unnecessary local variable dissected
2022-01-30 14:28:51 +02:00
Igor Gov
d0ef6c9f97 Remove debug logs from agent (#719) 2022-01-30 12:36:01 +02:00
Igor Gov
a4f7e61a6e Fix proxy retries (#718) 2022-01-30 12:13:06 +02:00
RoyUP9
a5fef90781 Small fix in check resources (#716) 2022-01-30 09:36:22 +02:00
Igor Gov
70cef9dc4b Fix: show agent-image in config after generation (#717) 2022-01-30 09:25:56 +02:00
Igor Gov
0f3dd66d2d Experimental feature: elastic exporter (#713) 2022-01-30 09:22:13 +02:00
Igor Gov
5536e5bb44 Fixing minor bugs and remove unused dependency (#714) 2022-01-30 08:51:17 +02:00
M. Mert Yıldıran
3bab83754f Fix the interface conversion and index out of range errors in the Redis dissector (#710) 2022-01-27 22:58:43 +03:00
Andrey Pokhilko
d011478a74 OAS: series of small improvements (#700) 2022-01-27 16:17:21 +02:00
M. Mert Yıldıran
7fa1a191a6 TRA-4235 Move Basenine binary into the same agent image but run it as a separate container (#702)
* TRA-4235 Revert "Move Basenine binary into a separate container"

* Deploy the same agent image as a separate container for Basenine

Co-authored-by: Igor Gov <iggvrv@gmail.com>
2022-01-27 11:40:26 +03:00
gadotroee
65bb338ed6 Revert "TRA-4169 apply query by enter (#665)" (#703)
This reverts commit c098ff3323.
2022-01-26 19:32:49 +02:00
leon-up9
c098ff3323 TRA-4169 apply query by enter (#665) 2022-01-26 15:35:13 +02:00
gadotroee
19b4810ded Docker file best practices (#698)
(based on https://github.com/up9inc/mizu/pull/692/files)
2022-01-26 15:22:52 +02:00
Adam Kol
beb8363722 better looking code (#699) 2022-01-26 15:15:20 +02:00
Adam Kol
1eb67c69d9 Cypress: big UI test first version is ready (#689) 2022-01-26 13:18:25 +02:00
RoyUP9
be3375f797 Added post install connectivity check (#686) 2022-01-26 12:11:34 +02:00
Andrey Pokhilko
0c56c0f541 Have own HAR library code (#695) 2022-01-26 10:57:18 +02:00
Gustavo Massaneiro
843ac722c9 [TRA-4189] Traffic Volume in GB for General Status and telemetry (#696) 2022-01-26 10:49:42 +02:00
M. Mert Yıldıran
a9a61edd50 Add ARM64 and cross-compilation support to the agent image (#659)
* modified Dockerfile to work for both amd64 (Intel) and arm64 (M1)

* added changelog

* Update `Dockerfile` to have `ARCH` build argument

* Remove `docs/CHANGES.md`

* Upgrade the Basenine version from `v0.3.0` to `v0.4.6`

* Update `publish.yml` to have `ARCH` build argument

* Switch `BasenineImageRepo` to Docker Hub

* Have separate build arguments for `ARCH` and `GOARCH`

* Upgrade the Basenine version from `v0.4.6` to `v0.4.10`

* Oops forgot to update the 10th duplicated shell script

* Fix the oopsie and reduce duplications

* Fix `Dockerfile`

* Fix the incompatibility issue between Go plugins and gold linker in Alpine inside `Dockerfile`

* Fix `asm: xxhash_amd64.s:120: when dynamic linking, R15 is clobbered by a global variable access` error

* Update `Dockerfile` to have cross-compilation on an AMD64 machine

Also revert changes in the shell scripts

* Delete `debug.Dockerfile`

* Create a custom base (`debian:buster-slim` based) image for the shipped image

* Replace `mertyildiran/debian-pcap` with `up9inc/debian-pcap`

* Upgrade Basenine version to `v0.4.12`

* Use `debian:stable-slim` as the base

* Fix the indentation in the `Dockerfile`

* Update `publish.yml`

* Enable `publish.yml` for `feature/multiarch_build` branch

* Tag correctly and set `ARCH` Docker argument

* Remove the lines that are forgotten to be removed from the shell scripts

* Add `MizuAgentImageRepo` constant and use it as default `AgentImage` value

* Bring back `Set up Cloud SDK` step to `Build the CLI and publish` job

* Build ARM64 CLI for Linux as well

* Revert "Enable `publish.yml` for `feature/multiarch_build` branch"

This reverts commit d30be4c1f0.

* Revert Go 1.17 upgrade

* Remove `build_extensions_debug.sh` as well

* Make the `Dockerfile` to compile the agent statically

* Statically link the protocol extensions

* Fix `Dockerfile`

* Bring back `-s -w` flags

* Verify the signatures of the downloads in `dockcross/linux-arm64-musl`

* Revert modifications in some shell scripts

* Make the `BUILDARCH` and `TARGETARCH` separation in the `Dockerfile`

* Separate cross-compilation builder image into a separate repo named `up9inc/linux-arm64-musl-go-libpcap`

* Fill the shell script and specify the tag for `dockcross/linux-arm64-musl`

* Remove the unnecessary dependencies from `builder-native-base`

* Improve the comments in the `Dockerfile`

* Upgrade Basenine version to `v0.4.13`

* Fix `Dockerfile`

* Revert "Revert "Enable `publish.yml` for `feature/multiarch_build` branch""

This reverts commit 303e466bdc.

* Revert "Revert "Revert "Enable `publish.yml` for `feature/multiarch_build` branch"""

This reverts commit 0fe252bbdb.

* Remove `push-docker-debug` from the `Makefile`

* Rename `publish.yml` to `release.yml`

Co-authored-by: Alex Haiut <alex@up9.com>
2022-01-25 21:24:50 +03:00
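The `BUILDARCH`/`TARGETARCH` plumbing described in the commit above can be illustrated with a short sketch. This is a hypothetical local invocation, not a command taken from the repository's scripts; it assumes the build arguments defined in the updated `Dockerfile` (`BUILDARCH`, `TARGETARCH`, `SEM_VER`) and the `up9inc/mizu:devlatest` tag used by the Makefile's `agent-docker` target, and the `-arm64v8` tag suffix is illustrative only.

```sh
# Hypothetical cross-compilation build on an AMD64 host producing an ARM64 image,
# mirroring the BUILDARCH/TARGETARCH separation introduced in this commit.
docker build \
  --build-arg BUILDARCH=amd64 \
  --build-arg TARGETARCH=arm64v8 \
  --build-arg SEM_VER=0.0.0 \
  -t up9inc/mizu:devlatest-arm64v8 .

# Native AMD64 build: the Dockerfile defaults (BUILDARCH=amd64, TARGETARCH=amd64) apply.
docker build -t up9inc/mizu:devlatest .
```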
164 changed files with 2705 additions and 2425 deletions

View File

@@ -1,4 +1,4 @@
name: PR validation
name: Build
on:
pull_request:
@@ -12,7 +12,7 @@ concurrency:
jobs:
build-cli:
name: Build CLI
name: CLI executable build
runs-on: ubuntu-latest
steps:
- name: Set up Go 1.16
@@ -27,35 +27,11 @@ jobs:
run: make cli
build-agent:
name: Build Agent
name: Agent docker image build
runs-on: ubuntu-latest
steps:
- name: Set up Go 1.16
uses: actions/setup-go@v2
with:
go-version: '1.16'
- name: Check out code into the Go module directory
uses: actions/checkout@v2
- shell: bash
run: |
sudo apt-get install libpcap-dev
- name: Build Agent
run: make agent
build-ui:
name: Build UI
runs-on: ubuntu-latest
steps:
- name: Set up Node 16
uses: actions/setup-node@v2
with:
node-version: '16'
- name: Check out code into the Go module directory
uses: actions/checkout@v2
- name: Build UI
run: make ui
run: make agent-docker

View File

@@ -1,95 +0,0 @@
name: publish
on:
push:
branches:
- 'develop'
- 'main'
concurrency:
group: mizu-publish-${{ github.ref }}
cancel-in-progress: true
jobs:
docker:
runs-on: ubuntu-latest
steps:
- name: Set up Go 1.16
uses: actions/setup-go@v2
with:
go-version: '1.16'
- name: Checkout
uses: actions/checkout@v2
- name: Set up Cloud SDK
uses: google-github-actions/setup-gcloud@master
with:
service_account_key: ${{ secrets.GCR_JSON_KEY }}
export_default_credentials: true
- uses: haya14busa/action-cond@v1
id: condval
with:
cond: ${{ github.ref == 'refs/heads/main' }}
if_true: "minor"
if_false: "patch"
- name: Auto Increment Semver Action
uses: MCKanpolat/auto-semver-action@1.0.5
id: versioning
with:
releaseType: ${{ steps.condval.outputs.value }}
github_token: ${{ secrets.GITHUB_TOKEN }}
- name: Get version parameters
shell: bash
run: |
echo "##[set-output name=build_timestamp;]$(echo $(date +%s))"
echo "##[set-output name=branch;]$(echo ${GITHUB_REF#refs/heads/})"
id: version_parameters
- name: Get base image name
shell: bash
run: echo "##[set-output name=image;]$(echo gcr.io/up9-docker-hub/mizu/${GITHUB_REF#refs/heads/})"
id: base_image_step
- name: Docker meta
id: meta
uses: crazy-max/ghaction-docker-meta@v2
with:
images: |
${{ steps.base_image_step.outputs.image }}
up9inc/mizu
tags: |
type=raw,${{ steps.versioning.outputs.version }}
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USER }}
password: ${{ secrets.DOCKERHUB_PASS }}
- name: Login to GCR
uses: docker/login-action@v1
with:
registry: gcr.io
username: _json_key
password: ${{ secrets.GCR_JSON_KEY }}
- name: Build and push
uses: docker/build-push-action@v2
with:
context: .
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
build-args: |
SEM_VER=${{ steps.versioning.outputs.version }}
BUILD_TIMESTAMP=${{ steps.version_parameters.outputs.build_timestamp }}
GIT_BRANCH=${{ steps.version_parameters.outputs.branch }}
COMMIT_HASH=${{ github.sha }}
- name: Build and Push CLI
run: make push-cli SEM_VER='${{ steps.versioning.outputs.version }}' BUILD_TIMESTAMP='${{ steps.version_parameters.outputs.build_timestamp }}'
- shell: bash
run: |
echo '${{ steps.versioning.outputs.version }}' >> cli/bin/version.txt
- name: publish
uses: ncipollo/release-action@v1
with:
token: ${{ secrets.GITHUB_TOKEN }}
artifacts: "cli/bin/*"
commit: ${{ steps.version_parameters.outputs.branch }}
tag: ${{ steps.versioning.outputs.version }}
prerelease: ${{ github.ref != 'refs/heads/main' }}
bodyFile: 'cli/bin/README.md'

.github/workflows/release.yml (new file, 278 lines)
View File

@@ -0,0 +1,278 @@
name: Release
on:
push:
branches:
- 'develop'
- 'main'
concurrency:
group: mizu-publish-${{ github.ref }}
cancel-in-progress: true
jobs:
docker-registry:
name: Push Docker image to Docker Hub
runs-on: ubuntu-latest
strategy:
max-parallel: 2
fail-fast: false
matrix:
target:
- amd64
- arm64v8
steps:
- name: Check out the repo
uses: actions/checkout@v2
- name: Determine versioning strategy
uses: haya14busa/action-cond@v1
id: condval
with:
cond: ${{ github.ref == 'refs/heads/main' }}
if_true: "minor"
if_false: "patch"
- name: Auto increment SemVer action
uses: MCKanpolat/auto-semver-action@1.0.5
id: versioning
with:
releaseType: ${{ steps.condval.outputs.value }}
github_token: ${{ secrets.GITHUB_TOKEN }}
- name: Get version parameters
shell: bash
run: |
echo "##[set-output name=build_timestamp;]$(echo $(date +%s))"
echo "##[set-output name=branch;]$(echo ${GITHUB_REF#refs/heads/})"
id: version_parameters
- name: Docker meta
id: meta
uses: docker/metadata-action@v3
with:
images: |
up9inc/mizu
tags: |
type=raw,${{ steps.versioning.outputs.version }}
flavor: |
latest=auto
prefix=
suffix=-${{ matrix.target }},onlatest=true
- name: Login to Docker Hub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USER }}
password: ${{ secrets.DOCKERHUB_PASS }}
- name: Build and push
uses: docker/build-push-action@v2
with:
context: .
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
build-args: |
TARGETARCH=${{ matrix.target }}
SEM_VER=${{ steps.versioning.outputs.version }}
BUILD_TIMESTAMP=${{ steps.version_parameters.outputs.build_timestamp }}
GIT_BRANCH=${{ steps.version_parameters.outputs.branch }}
COMMIT_HASH=${{ github.sha }}
gcp-registry:
name: Push Docker image to GCR
runs-on: ubuntu-latest
strategy:
max-parallel: 2
fail-fast: false
matrix:
target:
- amd64
- arm64v8
steps:
- name: Check out the repo
uses: actions/checkout@v2
- name: Set up Cloud SDK
uses: google-github-actions/setup-gcloud@master
with:
service_account_key: ${{ secrets.GCR_JSON_KEY }}
export_default_credentials: true
- name: Determine versioning strategy
uses: haya14busa/action-cond@v1
id: condval
with:
cond: ${{ github.ref == 'refs/heads/main' }}
if_true: "minor"
if_false: "patch"
- name: Auto increment SemVer action
uses: MCKanpolat/auto-semver-action@1.0.5
id: versioning
with:
releaseType: ${{ steps.condval.outputs.value }}
github_token: ${{ secrets.GITHUB_TOKEN }}
- name: Get version parameters
shell: bash
run: |
echo "##[set-output name=build_timestamp;]$(echo $(date +%s))"
echo "##[set-output name=branch;]$(echo ${GITHUB_REF#refs/heads/})"
id: version_parameters
- name: Get base image name
shell: bash
run: echo "##[set-output name=image;]$(echo gcr.io/up9-docker-hub/mizu/${GITHUB_REF#refs/heads/})"
id: base_image_step
- name: Docker meta
id: meta
uses: docker/metadata-action@v3
with:
images: |
${{ steps.base_image_step.outputs.image }}
tags: |
type=raw,${{ steps.versioning.outputs.version }}
flavor: |
latest=auto
prefix=
suffix=-${{ matrix.target }},onlatest=true
- name: Login to GCR
uses: docker/login-action@v1
with:
registry: gcr.io
username: _json_key
password: ${{ secrets.GCR_JSON_KEY }}
- name: Build and push
uses: docker/build-push-action@v2
with:
context: .
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
build-args: |
TARGETARCH=${{ matrix.target }}
SEM_VER=${{ steps.versioning.outputs.version }}
BUILD_TIMESTAMP=${{ steps.version_parameters.outputs.build_timestamp }}
GIT_BRANCH=${{ steps.version_parameters.outputs.branch }}
COMMIT_HASH=${{ github.sha }}
docker-manifest:
name: Create and Push a Docker Manifest
runs-on: ubuntu-latest
needs: [docker-registry]
steps:
- name: Determine versioning strategy
uses: haya14busa/action-cond@v1
id: condval
with:
cond: ${{ github.ref == 'refs/heads/main' }}
if_true: "minor"
if_false: "patch"
- name: Auto increment SemVer action
uses: MCKanpolat/auto-semver-action@1.0.5
id: versioning
with:
releaseType: ${{ steps.condval.outputs.value }}
github_token: ${{ secrets.GITHUB_TOKEN }}
- name: Get version parameters
shell: bash
run: |
echo "##[set-output name=build_timestamp;]$(echo $(date +%s))"
echo "##[set-output name=branch;]$(echo ${GITHUB_REF#refs/heads/})"
id: version_parameters
- name: Docker meta
id: meta
uses: docker/metadata-action@v3
with:
images: |
up9inc/mizu
tags: |
type=raw,${{ steps.versioning.outputs.version }}
- name: Login to Docker Hub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USER }}
password: ${{ secrets.DOCKERHUB_PASS }}
- name: Create manifest
run: |
while IFS= read -r line; do
docker manifest create $line --amend $line-amd64 --amend $line-arm64v8
done <<< "${{ steps.meta.outputs.tags }}"
- name: Push manifest
run: |
while IFS= read -r line; do
docker manifest push $line
done <<< "${{ steps.meta.outputs.tags }}"
cli:
name: Build the CLI and publish
runs-on: ubuntu-latest
needs: [docker-manifest, gcp-registry]
steps:
- name: Set up Go 1.16
uses: actions/setup-go@v2
with:
go-version: '1.16'
- name: Check out the repo
uses: actions/checkout@v2
- name: Set up Cloud SDK
uses: google-github-actions/setup-gcloud@master
with:
service_account_key: ${{ secrets.GCR_JSON_KEY }}
export_default_credentials: true
- uses: haya14busa/action-cond@v1
id: condval
with:
cond: ${{ github.ref == 'refs/heads/main' }}
if_true: "minor"
if_false: "patch"
- name: Auto Increment Semver Action
uses: MCKanpolat/auto-semver-action@1.0.5
id: versioning
with:
releaseType: ${{ steps.condval.outputs.value }}
github_token: ${{ secrets.GITHUB_TOKEN }}
- name: Get version parameters
shell: bash
run: |
echo "##[set-output name=build_timestamp;]$(echo $(date +%s))"
echo "##[set-output name=branch;]$(echo ${GITHUB_REF#refs/heads/})"
id: version_parameters
- name: Build and Push CLI
run: make push-cli SEM_VER='${{ steps.versioning.outputs.version }}' BUILD_TIMESTAMP='${{ steps.version_parameters.outputs.build_timestamp }}'
- name: Log the version into a .txt file
shell: bash
run: |
echo '${{ steps.versioning.outputs.version }}' >> cli/bin/version.txt
- name: Release
uses: ncipollo/release-action@v1
with:
token: ${{ secrets.GITHUB_TOKEN }}
artifacts: "cli/bin/*"
commit: ${{ steps.version_parameters.outputs.branch }}
tag: ${{ steps.versioning.outputs.version }}
prerelease: ${{ github.ref != 'refs/heads/main' }}
bodyFile: 'cli/bin/README.md'

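To make the result of the workflow above concrete: each matrix leg pushes an architecture-suffixed tag (`-amd64`, `-arm64v8`), and the `docker-manifest` job stitches them into a single multi-arch tag. The sketch below restates that step outside the workflow loop; `X.Y.Z` is a placeholder for the version computed by `auto-semver-action`.

```sh
# Illustration only; X.Y.Z stands for the computed release version.
docker manifest create up9inc/mizu:X.Y.Z \
  --amend up9inc/mizu:X.Y.Z-amd64 \
  --amend up9inc/mizu:X.Y.Z-arm64v8
docker manifest push up9inc/mizu:X.Y.Z
```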
View File

@@ -0,0 +1,90 @@
name: Static code analysis
on:
pull_request:
branches:
- 'develop'
- 'main'
permissions:
contents: read
jobs:
golangci:
name: Go lint
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/setup-go@v2
with:
go-version: '^1.16'
- name: Install dependencies
run: |
sudo apt update
sudo apt install -y libpcap-dev
- name: Go lint - agent
uses: golangci/golangci-lint-action@v2
with:
version: latest
working-directory: agent
args: --timeout=3m
- name: Go lint - shared
uses: golangci/golangci-lint-action@v2
with:
version: latest
working-directory: shared
args: --timeout=3m
- name: Go lint - tap
uses: golangci/golangci-lint-action@v2
with:
version: latest
working-directory: tap
args: --timeout=3m
- name: Go lint - CLI
uses: golangci/golangci-lint-action@v2
with:
version: latest
working-directory: cli
args: --timeout=3m
- name: Go lint - acceptanceTests
uses: golangci/golangci-lint-action@v2
with:
version: latest
working-directory: acceptanceTests
args: --timeout=3m
- name: Go lint - tap/api
uses: golangci/golangci-lint-action@v2
with:
version: latest
working-directory: tap/api
- name: Go lint - tap/extensions/amqp
uses: golangci/golangci-lint-action@v2
with:
version: latest
working-directory: tap/extensions/amqp
- name: Go lint - tap/extensions/http
uses: golangci/golangci-lint-action@v2
with:
version: latest
working-directory: tap/extensions/http
- name: Go lint - tap/extensions/kafka
uses: golangci/golangci-lint-action@v2
with:
version: latest
working-directory: tap/extensions/kafka
- name: Go lint - tap/extensions/redis
uses: golangci/golangci-lint-action@v2
with:
version: latest
working-directory: tap/extensions/redis

View File

@@ -1,14 +1,10 @@
name: tests validation
name: Test
on:
pull_request:
branches:
- 'develop'
- 'main'
push:
branches:
- 'develop'
- 'main'
concurrency:
group: mizu-tests-validation-${{ github.ref }}
@@ -16,7 +12,7 @@ concurrency:
jobs:
run-tests-cli:
name: Run CLI tests
name: CLI Tests
runs-on: ubuntu-latest
steps:
- name: Set up Go 1.16
@@ -34,7 +30,7 @@ jobs:
uses: codecov/codecov-action@v2
run-tests-agent:
name: Run Agent tests
name: Agent Tests
runs-on: ubuntu-latest
steps:
- name: Set up Go 1.16

View File

@@ -1,4 +1,8 @@
FROM node:16-slim AS site-build
ARG BUILDARCH=amd64
ARG TARGETARCH=amd64
### Front-end
FROM node:16 AS front-end
WORKDIR /app/ui-build
@@ -9,56 +13,93 @@ COPY ui .
RUN npm run build
RUN npm run build-ent
FROM golang:1.16-alpine AS builder
# Set necessary environment variables needed for our image.
ENV CGO_ENABLED=1 GOOS=linux GOARCH=amd64
### Base builder image for native builds architecture
FROM golang:1.16-alpine AS builder-native-base
ENV CGO_ENABLED=1 GOOS=linux
RUN apk add libpcap-dev g++ perl-utils
RUN apk add libpcap-dev gcc g++ make bash perl-utils
# Move to agent working directory (/agent-build).
### Intermediate builder image for x86-64 to x86-64 native builds
FROM builder-native-base AS builder-from-amd64-to-amd64
ENV GOARCH=amd64
### Intermediate builder image for AArch64 to AArch64 native builds
FROM builder-native-base AS builder-from-arm64v8-to-arm64v8
ENV GOARCH=arm64
### Builder image for x86-64 to AArch64 cross-compilation
FROM up9inc/linux-arm64-musl-go-libpcap AS builder-from-amd64-to-arm64v8
ENV CGO_ENABLED=1 GOOS=linux
ENV GOARCH=arm64 CGO_CFLAGS="-I/work/libpcap"
### Final builder image where the build happens
# Possible build strategies:
# BUILDARCH=amd64 TARGETARCH=amd64
# BUILDARCH=arm64v8 TARGETARCH=arm64v8
# BUILDARCH=amd64 TARGETARCH=arm64v8
ARG BUILDARCH=amd64
ARG TARGETARCH=amd64
FROM builder-from-${BUILDARCH}-to-${TARGETARCH} AS builder
# Move to agent working directory (/agent-build)
WORKDIR /app/agent-build
COPY agent/go.mod agent/go.sum ./
COPY shared/go.mod shared/go.mod ../shared/
COPY tap/go.mod tap/go.mod ../tap/
COPY tap/api/go.* ../tap/api/
COPY tap/api/go.mod ../tap/api/
COPY tap/extensions/amqp/go.mod ../tap/extensions/amqp/
COPY tap/extensions/http/go.mod ../tap/extensions/http/
COPY tap/extensions/kafka/go.mod ../tap/extensions/kafka/
COPY tap/extensions/redis/go.mod ../tap/extensions/redis/
RUN go mod download
# cheap trick to make the build faster (As long as go.mod wasn't changes)
# cheap trick to make the build faster (as long as go.mod did not change)
RUN go list -f '{{.Path}}@{{.Version}}' -m all | sed 1d | grep -e 'go-cache' | xargs go get
# Copy and build agent code
COPY shared ../shared
COPY tap ../tap
COPY agent .
ARG COMMIT_HASH
ARG GIT_BRANCH
ARG BUILD_TIMESTAMP
ARG SEM_VER=0.0.0
# Copy and build agent code
COPY shared ../shared
COPY tap ../tap
COPY agent .
RUN go build -ldflags="-s -w \
-X 'mizuserver/pkg/version.GitCommitHash=${COMMIT_HASH}' \
-X 'mizuserver/pkg/version.Branch=${GIT_BRANCH}' \
-X 'mizuserver/pkg/version.BuildTimestamp=${BUILD_TIMESTAMP}' \
-X 'mizuserver/pkg/version.SemVer=${SEM_VER}'" -o mizuagent .
WORKDIR /app/agent-build
COPY devops/build_extensions.sh ..
RUN cd .. && /bin/bash build_extensions.sh
RUN go build -ldflags="-extldflags=-static -s -w \
-X 'github.com/up9inc/mizu/agent/pkg/version.GitCommitHash=${COMMIT_HASH}' \
-X 'github.com/up9inc/mizu/agent/pkg/version.Branch=${GIT_BRANCH}' \
-X 'github.com/up9inc/mizu/agent/pkg/version.BuildTimestamp=${BUILD_TIMESTAMP}' \
-X 'github.com/up9inc/mizu/agent/pkg/version.SemVer=${SEM_VER}'" -o mizuagent .
FROM alpine:3.15
# Download Basenine executable, verify the sha1sum
ADD https://github.com/up9inc/basenine/releases/download/v0.4.13/basenine_linux_${GOARCH} ./basenine_linux_${GOARCH}
ADD https://github.com/up9inc/basenine/releases/download/v0.4.13/basenine_linux_${GOARCH}.sha256 ./basenine_linux_${GOARCH}.sha256
RUN shasum -a 256 -c basenine_linux_${GOARCH}.sha256
RUN chmod +x ./basenine_linux_${GOARCH}
RUN mv ./basenine_linux_${GOARCH} ./basenine
RUN apk add bash libpcap-dev
WORKDIR /app
# Copy binary and config files from /build to root folder of scratch container.
COPY --from=builder ["/app/agent-build/mizuagent", "."]
COPY --from=builder ["/app/agent/build/extensions", "extensions"]
COPY --from=site-build ["/app/ui-build/build", "site"]
COPY --from=site-build ["/app/ui-build/build-ent", "site-standalone"]
RUN mkdir /app/data/
### The shipped image
ARG TARGETARCH=amd64
FROM ${TARGETARCH}/busybox:latest
# gin-gonic runs in debug mode without this
ENV GIN_MODE=release
WORKDIR /app/data/
WORKDIR /app
# Copy binary and config files from /build to root folder of scratch container.
COPY --from=builder ["/app/agent-build/mizuagent", "."]
COPY --from=builder ["/app/agent-build/basenine", "/usr/local/bin/basenine"]
COPY --from=front-end ["/app/ui-build/build", "site"]
COPY --from=front-end ["/app/ui-build/build-ent", "site-standalone"]
# this script runs both apiserver and passivetapper and exits either if one of them exits, preventing a scenario where the container runs without one process
ENTRYPOINT "/app/mizuagent"
ENTRYPOINT ["/app/mizuagent"]

View File

@@ -8,7 +8,7 @@ SHELL=/bin/bash
# HELP
# This will output the help for each task
# thanks to https://marmelab.com/blog/2016/02/29/auto-documented-makefile.html
.PHONY: help ui extensions extensions-debug agent agent-debug cli tap docker
.PHONY: help ui agent agent-debug cli tap docker
help: ## This help.
@awk 'BEGIN {FS = ":.*?## "} /^[a-zA-Z_-]+:.*?## / {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}' $(MAKEFILE_LIST)
@@ -37,28 +37,26 @@ build-cli-ci: ## Build CLI for CI.
agent: ## Build agent.
@(echo "building mizu agent .." )
@(cd agent; go build -o build/mizuagent main.go)
${MAKE} extensions
@ls -l agent/build
agent-debug: ## Build agent for debug.
@(echo "building mizu agent for debug.." )
@(cd agent; go build -gcflags="all=-N -l" -o build/mizuagent main.go)
${MAKE} extensions-debug
@ls -l agent/build
docker: ## Build and publish agent docker image.
$(MAKE) push-docker
agent-docker: ## Build agent docker image.
@echo "Building agent docker image"
@docker build -t up9inc/mizu:devlatest .
push: push-docker push-cli ## Build and publish agent docker image & CLI.
push-docker: ## Build and publish agent docker image.
@echo "publishing Docker image .. "
devops/build-push-featurebranch.sh
push-docker-debug:
@echo "publishing debug Docker image .. "
devops/build-push-featurebranch-debug.sh
build-docker-ci: ## Build agent docker image for CI.
@echo "building docker image for ci"
devops/build-agent-ci.sh
@@ -71,7 +69,7 @@ push-cli: ## Build and publish CLI.
gsutil cp -r ./cli/bin/* gs://${BUCKET_PATH}/
gsutil setmeta -r -h "Cache-Control:public, max-age=30" gs://${BUCKET_PATH}/\*
clean: clean-ui clean-agent clean-cli clean-docker clean-extensions ## Clean all build artifacts.
clean: clean-ui clean-agent clean-cli clean-docker ## Clean all build artifacts.
clean-ui: ## Clean UI.
@(rm -rf ui/build ; echo "UI cleanup done" )
@@ -82,18 +80,9 @@ clean-agent: ## Clean agent.
clean-cli: ## Clean CLI.
@(cd cli; make clean ; echo "CLI cleanup done" )
clean-extensions: ## Clean extensions
@(rm -rf tap/extensions/*.so ; echo "Extensions cleanup done" )
clean-docker:
@(echo "DOCKER cleanup - NOT IMPLEMENTED YET " )
extensions-debug:
devops/build_extensions_debug.sh
extensions:
devops/build_extensions.sh
test-cli:
@echo "running cli tests"; cd cli && $(MAKE) test

View File

@@ -4,13 +4,16 @@
"viewportHeight": 1080,
"video": false,
"screenshotOnRunFailure": false,
"defaultCommandTimeout": 6000,
"testFiles": [
"tests/GuiPort.js",
"tests/MultipleNamespaces.js",
"tests/Redact.js",
"tests/NoRedact.js",
"tests/Regex.js",
"tests/RegexMasking.js"
"tests/RegexMasking.js",
"tests/IgnoredUserAgents.js",
"tests/UiTest.js"
],
"env": {

View File

@@ -1,14 +0,0 @@
{
"watchForFileChanges":false,
"viewportWidth": 1920,
"viewportHeight": 3500,
"video": false,
"screenshotOnRunFailure": false,
"testFiles": [
"tests/IgnoredUserAgents.js"
],
"env": {
"testUrl": "http://localhost:8899/"
}
}

View File

@@ -7,3 +7,12 @@ export function isValueExistsInElement(shouldInclude, content, domPathToContaine
});
});
}
export function resizeToHugeMizu() {
cy.viewport(1920, 3500);
}
export function resizeToNormalMizu() {
cy.viewport(1920, 1080);
}

View File

@@ -1,10 +1,11 @@
import {isValueExistsInElement} from "../testHelpers/TrafficHelper";
import {isValueExistsInElement, resizeToHugeMizu} from "../testHelpers/TrafficHelper";
it('Loading Mizu', function () {
cy.visit(Cypress.env('testUrl'));
});
it('going through each entry', function () {
resizeToHugeMizu();
cy.get('#total-entries').then(number => {
const getNum = () => {
const numOfEntries = number.text();

View File

@@ -0,0 +1,337 @@
import {findLineAndCheck, getExpectedDetailsDict} from "../testHelpers/StatusBarHelper";
import {resizeToHugeMizu, resizeToNormalMizu} from "../testHelpers/TrafficHelper";
const greenFilterColor = 'rgb(210, 250, 210)';
const redFilterColor = 'rgb(250, 214, 220)';
const refreshWaitTimeout = 10000;
const bodyJsonClass = '.hljs';
it('opening mizu', function () {
cy.visit(Cypress.env('testUrl'));
});
it('top bar check', function () {
const podName1 = 'httpbin', namespace1 = 'mizu-tests';
const podName2 = 'httpbin2', namespace2 = 'mizu-tests';
cy.get('.podsCount').trigger('mouseover');
findLineAndCheck(getExpectedDetailsDict(podName1, namespace1));
findLineAndCheck(getExpectedDetailsDict(podName2, namespace2));
cy.reload();
});
it('filtering guide check', function () {
cy.get('[title="Open Filtering Guide (Cheatsheet)"]').click();
cy.get('#modal-modal-title').should('be.visible');
cy.get('[lang="en"]').click(0, 0);
cy.get('#modal-modal-title').should('not.exist');
});
it('right side sanity test', function () {
cy.get('#entryDetailedTitleBodySize').then(sizeTopLine => {
const sizeOnTopLine = sizeTopLine.text().replace(' B', '');
cy.contains('Response').click();
cy.contains('Body Size (bytes)').parent().next().then(size => {
const bodySizeByDetails = size.text();
expect(sizeOnTopLine).to.equal(bodySizeByDetails, 'The body size in the top line should match the details in the response');
if (parseInt(bodySizeByDetails) < 0) {
throw new Error(`The body size cannot be negative. got the size: ${bodySizeByDetails}`)
}
cy.get('#entryDetailedTitleElapsedTime').then(timeInMs => {
const time = timeInMs.text();
if (time < '0ms') {
throw new Error(`The time in the top line cannot be negative ${time}`);
}
cy.get('#rightSideContainer [title="Status Code"]').then(status => {
const statusCode = status.text();
cy.contains('Status').parent().next().then(statusInDetails => {
const statusCodeInDetails = statusInDetails.text();
expect(statusCode).to.equal(statusCodeInDetails, 'The status code in the top line should match the status code in details');
});
});
});
});
});
});
checkIllegalFilter('invalid filter');
checkFilter({
name: 'http',
leftSidePath: '> :nth-child(1) > :nth-child(1)',
leftSideExpectedText: 'HTTP',
rightSidePath: '[title=HTTP]',
rightSideExpectedText: 'Hypertext Transfer Protocol -- HTTP/1.1',
applyByEnter: true
});
checkFilter({
name: 'response.status == 200',
leftSidePath: '[title="Status Code"]',
leftSideExpectedText: '200',
rightSidePath: '> :nth-child(2) [title="Status Code"]',
rightSideExpectedText: '200',
applyByEnter: false
});
checkFilter({
name: 'src.name == ""',
leftSidePath: '[title="Source Name"]',
leftSideExpectedText: '[Unresolved]',
rightSidePath: '> :nth-child(2) [title="Source Name"]',
rightSideExpectedText: '[Unresolved]',
applyByEnter: false
});
checkFilter({
name: 'method == "GET"',
leftSidePath: '> :nth-child(3) > :nth-child(1) > :nth-child(1) > :nth-child(2)',
leftSideExpectedText: 'GET',
rightSidePath: '> :nth-child(2) > :nth-child(2) > :nth-child(1) > :nth-child(1) > :nth-child(2)',
rightSideExpectedText: 'GET',
applyByEnter: true
});
checkFilter({
name: 'summary == "/get"',
leftSidePath: '> :nth-child(3) > :nth-child(1) > :nth-child(2) > :nth-child(2)',
leftSideExpectedText: '/get',
rightSidePath: '> :nth-child(2) > :nth-child(2) > :nth-child(1) > :nth-child(2) > :nth-child(2)',
rightSideExpectedText: '/get',
applyByEnter: false
});
checkFilter({
name: 'dst.name == "httpbin.mizu-tests"',
leftSidePath: '> :nth-child(3) > :nth-child(2) > :nth-child(3) > :nth-child(2)',
leftSideExpectedText: 'httpbin.mizu-tests',
rightSidePath: '> :nth-child(2) > :nth-child(2) > :nth-child(2) > :nth-child(3) > :nth-child(2)',
rightSideExpectedText: 'httpbin.mizu-tests',
applyByEnter: false
});
checkFilter({
name: 'src.ip == "127.0.0.1"',
leftSidePath: '[title="Source IP"]',
leftSideExpectedText: '127.0.0.1',
rightSidePath: '> :nth-child(2) [title="Source IP"]',
rightSideExpectedText: '127.0.0.1',
applyByEnter: false
});
checkFilterNoResults('method == "POST"');
function checkFilterNoResults(filterName) {
it(`checking the filter: ${filterName}. Expecting no results`, function () {
cy.get('#total-entries').then(number => {
const totalEntries = number.text();
// applying the filter
cy.get('.w-tc-editor-text').type(filterName);
cy.get('.w-tc-editor').should('have.attr', 'style').and('include', greenFilterColor);
cy.get('[type="submit"]').click();
// waiting for the entries number to load
cy.get('#total-entries', {timeout: refreshWaitTimeout}).should('have.text', totalEntries);
// the DOM should show 0 entries
cy.get('#entries-length').should('have.text', '0');
// going through every potential entry and verifies that it doesn't exist
[...Array(parseInt(totalEntries)).keys()].map(shouldNotExist);
cy.get('[title="Fetch old records"]').click();
cy.get('#noMoreDataTop', {timeout: refreshWaitTimeout}).should('be.visible');
cy.get('#entries-length').should('have.text', '0'); // after loading all entries there should still be 0 entries
// reloading then waiting for the entries number to load
cy.reload();
cy.get('#total-entries', {timeout: refreshWaitTimeout}).should('have.text', totalEntries);
});
});
}
function shouldNotExist(entryNum) {
cy.get(`entry-${entryNum}`).should('not.exist');
}
function checkIllegalFilter(illegalFilterName) {
it(`should show red search bar with the input: ${illegalFilterName}`, function () {
cy.get('#total-entries').then(number => {
const totalEntries = number.text();
cy.get('.w-tc-editor-text').type(illegalFilterName);
cy.get('.w-tc-editor').should('have.attr', 'style').and('include', redFilterColor);
cy.get('[type="submit"]').click();
cy.get('[role="alert"]').should('be.visible');
cy.get('.w-tc-editor-text').clear();
// reloading then waiting for the entries number to load
cy.reload();
cy.get('#total-entries', {timeout: refreshWaitTimeout}).should('have.text', totalEntries);
});
});
}
function checkFilter(filterDetails){
const {name, leftSidePath, rightSidePath, rightSideExpectedText, leftSideExpectedText, applyByEnter} = filterDetails;
const entriesForDeeperCheck = 5;
it(`checking the filter: ${name}`, function () {
cy.get('#total-entries').then(number => {
const totalEntries = number.text();
// checks the hover on the last entry (the only one in DOM at the beginning)
leftOnHoverCheck(totalEntries - 1, leftSidePath, name);
// applying the filter with alt+enter or with the button
cy.get('.w-tc-editor-text').type(`${name}${applyByEnter ? '{alt+enter}' : ''}`);
cy.get('.w-tc-editor').should('have.attr', 'style').and('include', greenFilterColor);
if (!applyByEnter)
cy.get('[type="submit"]').click();
// only one entry in DOM after filtering, checking all checks on it
leftTextCheck(totalEntries - 1, leftSidePath, leftSideExpectedText);
leftOnHoverCheck(totalEntries - 1, leftSidePath, name);
rightTextCheck(rightSidePath, rightSideExpectedText);
rightOnHoverCheck(rightSidePath, name);
checkRightSideResponseBody();
cy.get('[title="Fetch old records"]').click();
resizeToHugeMizu();
// waiting for the entries number to load
cy.get('#entries-length', {timeout: refreshWaitTimeout}).should('have.text', totalEntries);
// checking only 'leftTextCheck' on all entries because the rest of the checks require more time
[...Array(parseInt(totalEntries)).keys()].forEach(entryNum => {
leftTextCheck(entryNum, leftSidePath, leftSideExpectedText);
});
// making the other 3 checks on the first X entries (longer time for each check)
deeperChcek(leftSidePath, rightSidePath, name, leftSideExpectedText, rightSideExpectedText, entriesForDeeperCheck);
// reloading then waiting for the entries number to load
resizeToNormalMizu();
cy.reload();
cy.get('#total-entries', {timeout: refreshWaitTimeout}).should('have.text', totalEntries);
});
});
}
function deeperChcek(leftSidePath, rightSidePath, filterName, leftSideExpectedText, rightSideExpectedText, entriesNumToCheck) {
[...Array(entriesNumToCheck).keys()].forEach(entryNum => {
leftOnHoverCheck(entryNum, leftSidePath, filterName);
cy.get(`#list #entry-${entryNum}`).click();
rightTextCheck(rightSidePath, rightSideExpectedText);
rightOnHoverCheck(rightSidePath, filterName);
checkRightSideResponseBody();
});
}
function leftTextCheck(entryNum, path, expectedText) {
cy.get(`#list #entry-${entryNum} ${path}`).invoke('text').should('eq', expectedText);
}
function leftOnHoverCheck(entryNum, path, filterName) {
cy.get(`#list #entry-${entryNum} ${path}`).trigger('mouseover');
cy.get(`#list #entry-${entryNum} .Queryable-Tooltip`).should('have.text', filterName);
}
function rightTextCheck(path, expectedText) {
cy.get(`.TrafficPage-Container > :nth-child(2) ${path}`).should('have.text', expectedText);
}
function rightOnHoverCheck(path, expectedText) {
cy.get(`.TrafficPage-Container > :nth-child(2) ${path}`).trigger('mouseover');
cy.get(`.TrafficPage-Container > :nth-child(2) .Queryable-Tooltip`).should('have.text', expectedText);
}
function checkRightSideResponseBody() {
cy.contains('Response').click();
clickCheckbox('Decode Base64');
cy.get(`${bodyJsonClass}`).then(value => {
const encodedBody = value.text();
cy.log(encodedBody);
const decodedBody = atob(encodedBody);
const responseBody = JSON.parse(decodedBody);
const expectdJsonBody = {
args: RegExp({}),
url: RegExp('http://.*/get'),
headers: {
"User-Agent": RegExp('[REDACTED]'),
"Accept-Encoding": RegExp('gzip'),
"X-Forwarded-Uri": RegExp('/api/v1/namespaces/.*/services/.*/proxy/get')
}
};
expect(responseBody.args).to.match(expectdJsonBody.args);
expect(responseBody.url).to.match(expectdJsonBody.url);
expect(responseBody.headers['User-Agent']).to.match(expectdJsonBody.headers['User-Agent']);
expect(responseBody.headers['Accept-Encoding']).to.match(expectdJsonBody.headers['Accept-Encoding']);
expect(responseBody.headers['X-Forwarded-Uri']).to.match(expectdJsonBody.headers['X-Forwarded-Uri']);
cy.get(`${bodyJsonClass}`).should('have.text', encodedBody);
clickCheckbox('Decode Base64');
cy.get(`${bodyJsonClass} > `).its('length').should('be.gt', 1).then(linesNum => {
cy.get(`${bodyJsonClass} > >`).its('length').should('be.gt', linesNum).then(jsonItemsNum => {
checkPrettyAndLineNums(jsonItemsNum, decodedBody);
clickCheckbox('Line numbers');
checkPrettyOrNothing(jsonItemsNum, decodedBody);
clickCheckbox('Pretty');
checkPrettyOrNothing(jsonItemsNum, decodedBody);
clickCheckbox('Line numbers');
checkOnlyLineNumberes(jsonItemsNum, decodedBody);
});
});
});
}
function clickCheckbox(type) {
cy.contains(`${type}`).prev().children().click();
}
function checkPrettyAndLineNums(jsonItemsLen, decodedBody) {
decodedBody = decodedBody.replaceAll(' ', '');
cy.get(`${bodyJsonClass} >`).then(elements => {
const lines = Object.values(elements);
lines.forEach((line, index) => {
if (line.getAttribute) {
const cleanLine = getCleanLine(line);
const currentLineFromDecodedText = decodedBody.substring(0, cleanLine.length);
expect(cleanLine).to.equal(currentLineFromDecodedText, `expected the text in line number ${index + 1} to match the text that generated by the base64 decoding`)
decodedBody = decodedBody.substring(cleanLine.length);
}
});
});
}
function getCleanLine(lineElement) {
return (lineElement.innerText.substring(0, lineElement.innerText.length - 1)).replaceAll(' ', '');
}
function checkPrettyOrNothing(jsonItems, decodedBody) {
cy.get(`${bodyJsonClass} > `).should('have.length', jsonItems).then(text => {
const json = text.text();
expect(json).to.equal(decodedBody);
});
}
function checkOnlyLineNumberes(jsonItems, decodedText) {
cy.get(`${bodyJsonClass} >`).should('have.length', 1).and('have.text', decodedText);
cy.get(`${bodyJsonClass} > >`).should('have.length', jsonItems)
}

View File

@@ -62,35 +62,7 @@ func TestTap(t *testing.T) {
}
}
entriesCheckFunc := func() error {
timestamp := time.Now().UnixNano() / int64(time.Millisecond)
entries, err := getDBEntries(timestamp, entriesCount, 1*time.Second)
if err != nil {
return err
}
err = checkEntriesAtLeast(entries, 1)
if err != nil {
return err
}
entry := entries[0]
entryUrl := fmt.Sprintf("%v/entries/%v", apiServerUrl, entry["id"])
requestResult, requestErr := executeHttpGetRequest(entryUrl)
if requestErr != nil {
return fmt.Errorf("failed to get entry, err: %v", requestErr)
}
if requestResult == nil {
return fmt.Errorf("unexpected nil entry result")
}
return nil
}
if err := retriesExecute(shortRetriesCount, entriesCheckFunc); err != nil {
t.Errorf("%v", err)
return
}
runCypressTests(t, "npx cypress run --spec \"cypress/integration/tests/UiTest.js\"")
})
}
}
@@ -138,7 +110,7 @@ func TestTapGuiPort(t *testing.T) {
return
}
runCypressTests(t, fmt.Sprintf("npx cypress run --spec \"cypress/integration/tests/GuiPort.js\" --env port=%d --config-file cypress/integration/configurations/Default.json", guiPort))
runCypressTests(t, fmt.Sprintf("npx cypress run --spec \"cypress/integration/tests/GuiPort.js\" --env port=%d", guiPort))
})
}
}
@@ -184,7 +156,7 @@ func TestTapAllNamespaces(t *testing.T) {
return
}
runCypressTests(t, fmt.Sprintf("npx cypress run --spec \"cypress/integration/tests/MultipleNamespaces.js\" --env name1=%v,name2=%v,name3=%v,namespace1=%v,namespace2=%v,namespace3=%v --config-file cypress/integration/configurations/Default.json",
runCypressTests(t, fmt.Sprintf("npx cypress run --spec \"cypress/integration/tests/MultipleNamespaces.js\" --env name1=%v,name2=%v,name3=%v,namespace1=%v,namespace2=%v,namespace3=%v",
expectedPods[0].Name, expectedPods[1].Name, expectedPods[2].Name, expectedPods[0].Namespace, expectedPods[1].Namespace, expectedPods[2].Namespace))
}
@@ -233,7 +205,7 @@ func TestTapMultipleNamespaces(t *testing.T) {
return
}
runCypressTests(t, fmt.Sprintf("npx cypress run --spec \"cypress/integration/tests/MultipleNamespaces.js\" --env name1=%v,name2=%v,name3=%v,namespace1=%v,namespace2=%v,namespace3=%v --config-file cypress/integration/configurations/Default.json",
runCypressTests(t, fmt.Sprintf("npx cypress run --spec \"cypress/integration/tests/MultipleNamespaces.js\" --env name1=%v,name2=%v,name3=%v,namespace1=%v,namespace2=%v,namespace3=%v",
expectedPods[0].Name, expectedPods[1].Name, expectedPods[2].Name, expectedPods[0].Namespace, expectedPods[1].Namespace, expectedPods[2].Namespace))
}
@@ -279,7 +251,7 @@ func TestTapRegex(t *testing.T) {
return
}
runCypressTests(t, fmt.Sprintf("npx cypress run --spec \"cypress/integration/tests/Regex.js\" --env name=%v,namespace=%v --config-file cypress/integration/configurations/Default.json",
runCypressTests(t, fmt.Sprintf("npx cypress run --spec \"cypress/integration/tests/Regex.js\" --env name=%v,namespace=%v",
expectedPods[0].Name, expectedPods[0].Namespace))
}
@@ -377,7 +349,7 @@ func TestTapRedact(t *testing.T) {
}
}
runCypressTests(t, fmt.Sprintf("npx cypress run --spec \"cypress/integration/tests/Redact.js\" --config-file cypress/integration/configurations/Default.json"))
runCypressTests(t, "npx cypress run --spec \"cypress/integration/tests/Redact.js\"")
}
func TestTapNoRedact(t *testing.T) {
@@ -429,7 +401,7 @@ func TestTapNoRedact(t *testing.T) {
}
}
runCypressTests(t, "npx cypress run --spec \"cypress/integration/tests/NoRedact.js\" --config-file cypress/integration/configurations/Default.json")
runCypressTests(t, "npx cypress run --spec \"cypress/integration/tests/NoRedact.js\"")
}
func TestTapRegexMasking(t *testing.T) {
@@ -480,7 +452,7 @@ func TestTapRegexMasking(t *testing.T) {
}
}
runCypressTests(t, "npx cypress run --spec \"cypress/integration/tests/RegexMasking.js\" --config-file cypress/integration/configurations/Default.json")
runCypressTests(t, "npx cypress run --spec \"cypress/integration/tests/RegexMasking.js\"")
}
@@ -542,7 +514,7 @@ func TestTapIgnoredUserAgents(t *testing.T) {
}
}
runCypressTests(t, "npx cypress run --spec \"cypress/integration/tests/IgnoredUserAgents.js\" --config-file cypress/integration/configurations/HugeMizu.json")
runCypressTests(t, "npx cypress run --spec \"cypress/integration/tests/IgnoredUserAgents.js\"")
}
func TestTapDumpLogs(t *testing.T) {

View File

@@ -3,7 +3,6 @@ package acceptanceTests
import (
"bytes"
"encoding/json"
"errors"
"fmt"
"io/ioutil"
"net/http"
@@ -11,12 +10,10 @@ import (
"os/exec"
"path"
"strings"
"sync"
"syscall"
"testing"
"time"
"github.com/gorilla/websocket"
"github.com/up9inc/mizu/shared"
)
@@ -28,7 +25,6 @@ const (
defaultServiceName = "httpbin"
defaultEntriesCount = 50
waitAfterTapPodsReady = 3 * time.Second
cleanCommandTimeout = 1 * time.Minute
)
type PodDescriptor struct {
@@ -36,18 +32,6 @@ type PodDescriptor struct {
Namespace string
}
func isPodDescriptorInPodArray(pods []map[string]interface{}, podDescriptor PodDescriptor) bool {
for _, pod := range pods {
podNamespace := pod["namespace"].(string)
podName := pod["name"].(string)
if podDescriptor.Namespace == podNamespace && strings.Contains(podName, podDescriptor.Name) {
return true
}
}
return false
}
func getCliPath() (string, error) {
dir, filePathErr := os.Getwd()
if filePathErr != nil {
@@ -84,10 +68,6 @@ func getApiServerUrl(port uint16) string {
return fmt.Sprintf("http://localhost:%v", port)
}
func getWebSocketUrl(port uint16) string {
return fmt.Sprintf("ws://localhost:%v/ws", port)
}
func getDefaultCommandArgs() []string {
setFlag := "--set"
telemetry := "telemetry=false"
@@ -130,20 +110,6 @@ func getDefaultConfigCommandArgs() []string {
return append([]string{configCommand}, defaultCmdArgs...)
}
func getDefaultCleanCommandArgs() []string {
cleanCommand := "clean"
defaultCmdArgs := getDefaultCommandArgs()
return append([]string{cleanCommand}, defaultCmdArgs...)
}
func getDefaultViewCommandArgs() []string {
viewCommand := "view"
defaultCmdArgs := getDefaultCommandArgs()
return append([]string{viewCommand}, defaultCmdArgs...)
}
func runCypressTests(t *testing.T, cypressRunCmd string) {
cypressCmd := exec.Command("bash", "-c", cypressRunCmd)
t.Logf("running command: %v", cypressCmd.String())
@@ -268,36 +234,6 @@ func executeHttpPostRequestWithHeaders(url string, headers map[string]string, bo
return executeHttpRequest(response, requestErr)
}
func runMizuClean() error {
cliPath, err := getCliPath()
if err != nil {
return err
}
cleanCmdArgs := getDefaultCleanCommandArgs()
cleanCmd := exec.Command(cliPath, cleanCmdArgs...)
commandDone := make(chan error)
go func() {
if err := cleanCmd.Run(); err != nil {
commandDone <- err
}
commandDone <- nil
}()
select {
case err = <-commandDone:
if err != nil {
return err
}
case <-time.After(cleanCommandTimeout):
return errors.New("clean command timed out")
}
return nil
}
func cleanupCommand(cmd *exec.Cmd) error {
if err := cmd.Process.Signal(syscall.SIGQUIT); err != nil {
return err
@@ -310,17 +246,6 @@ func cleanupCommand(cmd *exec.Cmd) error {
return nil
}
func getPods(tapStatusInterface interface{}) ([]map[string]interface{}, error) {
tapPodsInterface := tapStatusInterface.([]interface{})
var pods []map[string]interface{}
for _, podInterface := range tapPodsInterface {
pods = append(pods, podInterface.(map[string]interface{}))
}
return pods, nil
}
func getLogsPath() (string, error) {
dir, filePathErr := os.Getwd()
if filePathErr != nil {
@@ -331,77 +256,6 @@ func getLogsPath() (string, error) {
return logsPath, nil
}
// waitTimeout waits for the waitgroup for the specified max timeout.
// Returns true if waiting timed out.
func waitTimeout(wg *sync.WaitGroup, timeout time.Duration) bool {
channel := make(chan struct{})
go func() {
defer close(channel)
wg.Wait()
}()
select {
case <-channel:
return false // completed normally
case <-time.After(timeout):
return true // timed out
}
}
// checkEntriesAtLeast checks whether the number of entries greater than or equal to n
func checkEntriesAtLeast(entries []map[string]interface{}, n int) error {
if len(entries) < n {
return fmt.Errorf("Unexpected entries result - Expected more than %d entries", n-1)
}
return nil
}
// getDBEntries retrieves the entries from the database before the given timestamp.
// Also limits the results according to the limit parameter.
// Timeout for the WebSocket connection is defined by the timeout parameter.
func getDBEntries(timestamp int64, limit int, timeout time.Duration) (entries []map[string]interface{}, err error) {
query := fmt.Sprintf("timestamp < %d and limit(%d)", timestamp, limit)
webSocketUrl := getWebSocketUrl(defaultApiServerPort)
var connection *websocket.Conn
connection, _, err = websocket.DefaultDialer.Dial(webSocketUrl, nil)
if err != nil {
return
}
defer connection.Close()
handleWSConnection := func(wg *sync.WaitGroup) {
defer wg.Done()
for {
_, message, err := connection.ReadMessage()
if err != nil {
return
}
var data map[string]interface{}
if err = json.Unmarshal([]byte(message), &data); err != nil {
return
}
if data["messageType"] == "entry" {
entries = append(entries, data)
}
}
}
err = connection.WriteMessage(websocket.TextMessage, []byte(query))
if err != nil {
return
}
var wg sync.WaitGroup
go handleWSConnection(&wg)
wg.Add(1)
waitTimeout(&wg, timeout)
return
}
func Contains(slice []string, containsValue string) bool {
for _, sliceValue := range slice {
if sliceValue == containsValue {

View File

@@ -1,20 +0,0 @@
# mizu agent
Agent for MIZU (API server and tapper)
Basic APIs:
* /stats - retrieve statistics of collected data
* /viewer - web ui
## Remote Debugging
### Setup remote debugging
1. Run `go get github.com/go-delve/delve/cmd/dlv`
2. Create a "Go Remote" run/debug configuration in Intellij, set to localhost:2345
3. Build and push a debug image using
`docker build . -t gcr.io/up9-docker-hub/mizu/debug:latest -f debug.Dockerfile && docker push gcr.io/up9-docker-hub/mizu/debug:latest`
### Connecting
1. Start mizu using the cli with the debug
image `mizu tap --set agent-image=gcr.io/up9-docker-hub/mizu/debug:latest {tapped_pod_name}`
2. Forward the debug port using `kubectl port-forward -n default mizu-api-server 2345:2345`
3. Run the run/debug configuration you've created earlier in Intellij.
<small>Do note that dlv won't start the api until a debugger connects to it.</small>

View File

@@ -1,4 +1,4 @@
module mizuserver
module github.com/up9inc/mizu/agent
go 1.16
@@ -6,13 +6,13 @@ require (
github.com/antelman107/net-wait-go v0.0.0-20210623112055-cf684aebda7b
github.com/chanced/openapi v0.0.6
github.com/djherbis/atime v1.0.0
github.com/elastic/go-elasticsearch/v7 v7.16.0
github.com/getkin/kin-openapi v0.76.0
github.com/gin-contrib/static v0.0.1
github.com/gin-gonic/gin v1.7.7
github.com/go-playground/locales v0.13.0
github.com/go-playground/universal-translator v0.17.0
github.com/go-playground/validator/v10 v10.5.0
github.com/google/martian v2.1.0+incompatible
github.com/google/uuid v1.1.2
github.com/gorilla/websocket v1.4.2
github.com/op/go-logging v0.0.0-20160315200505-970db520ece7
@@ -21,10 +21,14 @@ require (
github.com/ory/kratos-client-go v0.8.2-alpha.1
github.com/patrickmn/go-cache v2.1.0+incompatible
github.com/stretchr/testify v1.7.0
github.com/up9inc/basenine/client/go v0.0.0-20220110083745-04fbc6c2068d
github.com/up9inc/basenine/client/go v0.0.0-20220125035724-573fff0d5075
github.com/up9inc/mizu/shared v0.0.0
github.com/up9inc/mizu/tap v0.0.0
github.com/up9inc/mizu/tap/api v0.0.0
github.com/up9inc/mizu/tap/extensions/amqp v0.0.0
github.com/up9inc/mizu/tap/extensions/http v0.0.0
github.com/up9inc/mizu/tap/extensions/kafka v0.0.0
github.com/up9inc/mizu/tap/extensions/redis v0.0.0
github.com/yalp/jsonpath v0.0.0-20180802001716-5cc68e5049a0
k8s.io/api v0.21.2
k8s.io/apimachinery v0.21.2
@@ -36,3 +40,11 @@ replace github.com/up9inc/mizu/shared v0.0.0 => ../shared
replace github.com/up9inc/mizu/tap v0.0.0 => ../tap
replace github.com/up9inc/mizu/tap/api v0.0.0 => ../tap/api
replace github.com/up9inc/mizu/tap/extensions/amqp v0.0.0 => ../tap/extensions/amqp
replace github.com/up9inc/mizu/tap/extensions/http v0.0.0 => ../tap/extensions/http
replace github.com/up9inc/mizu/tap/extensions/kafka v0.0.0 => ../tap/extensions/kafka
replace github.com/up9inc/mizu/tap/extensions/redis v0.0.0 => ../tap/extensions/redis

View File

@@ -77,6 +77,8 @@ github.com/asaskevich/govalidator v0.0.0-20200428143746-21a406dcc535/go.mod h1:o
github.com/asaskevich/govalidator v0.0.0-20200907205600-7a23bdc65eef h1:46PFijGLmAjMPwCCCo7Jf0W6f9slllCkkv7vyc1yOSg=
github.com/asaskevich/govalidator v0.0.0-20200907205600-7a23bdc65eef/go.mod h1:WaHUgvxTVq04UNunO+XhnAqY/wQc+bxr74GqbsZ/Jqw=
github.com/aws/aws-sdk-go v1.34.28/go.mod h1:H7NKnBqNVzoTJpGfLrQkkD+ytBA93eiDYi/+8rV9s48=
github.com/beevik/etree v1.1.0 h1:T0xke/WvNtMoCqgzPhkX2r4rjY3GDZFi+FjpRZY2Jbs=
github.com/beevik/etree v1.1.0/go.mod h1:r8Aw8JqVegEf0w2fDnATrX9VpkMcyFeM0FhwO62wh+A=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
@@ -122,6 +124,10 @@ github.com/docker/distribution v2.7.1+incompatible/go.mod h1:J2gT2udsDAN96Uj4Kfc
github.com/docker/go-units v0.3.3/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/docker/go-units v0.4.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/docopt/docopt-go v0.0.0-20180111231733-ee0de3bc6815/go.mod h1:WwZ+bS3ebgob9U8Nd0kOddGdZWjyMGR8Wziv+TBNwSE=
github.com/eapache/go-xerial-snappy v0.0.0-20180814174437-776d5712da21 h1:YEetp8/yCZMuEPMUDHG0CW/brkkEp8mzqk2+ODEitlw=
github.com/eapache/go-xerial-snappy v0.0.0-20180814174437-776d5712da21/go.mod h1:+020luEh2TKB4/GOp8oxxtq0Daoen/Cii55CzbTV6DU=
github.com/elastic/go-elasticsearch/v7 v7.16.0 h1:GHsxDFXIAlhSleXun4kwA89P7kQFADRChqvgOPeYP5A=
github.com/elastic/go-elasticsearch/v7 v7.16.0/go.mod h1:OJ4wdbtDNk5g503kvlHLyErCgQwwzmDtaFC4XyOxXA4=
github.com/elazarl/goproxy v0.0.0-20180725130230-947c36da3153 h1:yUdfgN0XgIJw7foRItutHYUIhlcKzcSf5vDpdhQAKTc=
github.com/elazarl/goproxy v0.0.0-20180725130230-947c36da3153/go.mod h1:/Zj4wYkgs4iZTTu3o/KG3Itv/qCCa8VVMlb3i9OVuzc=
github.com/emicklei/go-restful v0.0.0-20170410110728-ff4f55a20633/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs=
@@ -136,10 +142,13 @@ github.com/evanphx/json-patch v4.9.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLi
github.com/evanphx/json-patch/v5 v5.6.0 h1:b91NhWfaz02IuVxO9faSllyAtNXHMPkC5J8sJCLunww=
github.com/evanphx/json-patch/v5 v5.6.0/go.mod h1:G79N1coSVB93tBe7j6PhzjmR3/2VvlbKOFpnXhI9Bw4=
github.com/exponent-io/jsonpath v0.0.0-20151013193312-d6023ce2651d/go.mod h1:ZZMPRZwes7CROmyNKgQzC3XPs6L/G2EJLHddWejkmf4=
github.com/fatih/camelcase v1.0.0 h1:hxNvNX/xYBp0ovncs8WyWZrOrpBNub/JfaMvbURyft8=
github.com/fatih/camelcase v1.0.0/go.mod h1:yN2Sb0lFhZJUdVvtELVWefmrXpuZESvPmqwoZc+/fpc=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/form3tech-oss/jwt-go v3.2.2+incompatible h1:TcekIExNqud5crz4xD2pavyTgWiPvpYe4Xau31I0PRk=
github.com/form3tech-oss/jwt-go v3.2.2+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k=
github.com/frankban/quicktest v1.11.3 h1:8sXhOn0uLys67V8EsXLc6eszDs8VXWxL3iRvebPhedY=
github.com/frankban/quicktest v1.11.3/go.mod h1:wRf/ReqHper53s+kmmSZizM8NamnL3IM0I9ntUbOk+k=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
github.com/fvbommel/sortorder v1.0.1/go.mod h1:uk88iVf1ovNn1iLfgUVU2F9o5eO30ui720w+kxuqRs0=
@@ -337,6 +346,7 @@ github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QD
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.4.3 h1:JjCZWpVbqXDqFVmTfYWEVTMIYrL/NPdPSCHPJ0T/raM=
github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/snappy v0.0.1 h1:Qgr9rKW7uDUkrbSmQeiDsGa8SjGyCOGtuasMWwvp2P4=
github.com/golang/snappy v0.0.1/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/golangplus/testing v0.0.0-20180327235837-af21d9c3145e/go.mod h1:0AA//k/eakGydO4jKRoRL2j92ZKSzTgj9tclaCrvXHk=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
@@ -436,11 +446,16 @@ github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQL
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.9.5/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0guNDohfE1A=
github.com/klauspost/compress v1.9.8/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0guNDohfE1A=
github.com/klauspost/compress v1.14.1 h1:hLQYb23E8/fO+1u53d02A97a8UnsddcvYzq4ERRU4ds=
github.com/klauspost/compress v1.14.1/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.2/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pretty v0.2.1 h1:Fmg33tUaq4/8ym9TJN1x7sLJnHVwhP33CNkpYV/7rwI=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/pty v1.1.5/go.mod h1:9r2w37qlBe7rQ6e1fg1S/9xpWHSnaqNdHD3WcMdbPDA=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
@@ -502,6 +517,8 @@ github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f h1:y5//uYreIhSUg3J
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+o7JKHSa8/e818NopupXU1YMK5fe1lsApnBw=
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e h1:fD57ERR4JtEqsWbfPhv4DMiApHyliiK5xCTNVSPiaAs=
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno=
github.com/ohler55/ojg v1.12.12 h1:hepbQFn7GHAecTPmwS3j5dCiOLsOpzPLvhiqnlAVAoE=
github.com/ohler55/ojg v1.12.12/go.mod h1:LBbIVRAgoFbYBXQhRhuEpaJIqq+goSO63/FQ+nyJU88=
github.com/oklog/ulid v1.3.1 h1:EGfNDEx6MqHz8B3uNV6QAib1UR2Lm97sHi3ocA6ESJ4=
github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
github.com/olekukonko/tablewriter v0.0.4/go.mod h1:zq6QwlOf5SlnkVbMSr5EoBv3636FWnp+qbPhuoO21uA=
@@ -529,6 +546,8 @@ github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/9
github.com/pelletier/go-toml v1.4.0/go.mod h1:PN7xzY2wHTK0K9p34ErDQMlFxa51Fk0OUruD3k1mMwo=
github.com/pelletier/go-toml v1.7.0/go.mod h1:vwGMzjaWMwyfHwgIBhI2YUM4fB6nL6lVAvS1LBMMhTE=
github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU=
github.com/pierrec/lz4 v2.6.0+incompatible h1:Ix9yFKn1nSPBLFl/yZknTp8TU5G4Ps0JDmguYK6iH1A=
github.com/pierrec/lz4 v2.6.0+incompatible/go.mod h1:pdkljMzZIN41W+lC3N2tnIh5sFi+IEE17M5jbnwPHcY=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
@@ -564,6 +583,8 @@ github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb
github.com/santhosh-tekuri/jsonschema/v5 v5.0.0 h1:TToq11gyfNlrMFZiYujSekIsPd9AmsA2Bj/iv+s4JHE=
github.com/santhosh-tekuri/jsonschema/v5 v5.0.0/go.mod h1:FKdcjfQW6rpZSnxxUvEA5H/cDPdvJ/SZJQLWWXWGrZ0=
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
github.com/segmentio/kafka-go v0.4.27 h1:sIhEozeL/TLN2mZ5dkG462vcGEWYKS+u31sXPjKhAM4=
github.com/segmentio/kafka-go v0.4.27/go.mod h1:XzMcoMjSzDGHcIwpWUI7GB43iKZ2fTVmryPSGLf/MPg=
github.com/sergi/go-diff v1.0.0/go.mod h1:0CfEIISq7TuYL3j771MWULgwwjU+GofnZX9QAmXWZgo=
github.com/sergi/go-diff v1.1.0/go.mod h1:STckp+ISIX8hZLjrqAeVduY0gWCT9IjLuqbuNXdaHfM=
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
@@ -615,8 +636,8 @@ github.com/ugorji/go v1.1.7 h1:/68gy2h+1mWMrwZFeD1kQialdSzAb432dtpeJ42ovdo=
github.com/ugorji/go v1.1.7/go.mod h1:kZn38zHttfInRq0xu/PH0az30d+z6vm202qpg1oXVMw=
github.com/ugorji/go/codec v1.1.7 h1:2SvQaVZ1ouYrrKKwoSk2pzd4A9evlKJb9oTL+OaLUSs=
github.com/ugorji/go/codec v1.1.7/go.mod h1:Ax+UKWsSmolVDwsd+7N3ZtXu+yMGCf907BLYF3GoBXY=
github.com/up9inc/basenine/client/go v0.0.0-20220110083745-04fbc6c2068d h1:WTz53dcfqCIWZpZLQoHbIcNc21s0ZHEZH7EqMPp99qQ=
github.com/up9inc/basenine/client/go v0.0.0-20220110083745-04fbc6c2068d/go.mod h1:SvJGPoa/6erhUQV7kvHBwM/0x5LyO6XaG2lUaCaKiUI=
github.com/up9inc/basenine/client/go v0.0.0-20220125035724-573fff0d5075 h1:Tp0yckZkvb8BNC+bB4B+fBuQdomTner6bOte1S8SYCo=
github.com/up9inc/basenine/client/go v0.0.0-20220125035724-573fff0d5075/go.mod h1:SvJGPoa/6erhUQV7kvHBwM/0x5LyO6XaG2lUaCaKiUI=
github.com/vektah/gqlparser v1.1.2/go.mod h1:1ycwN7Ij5njmMkPPAOaRFY4rET2Enx7IkVv3vaXspKw=
github.com/vishvananda/netns v0.0.0-20210104183010-2eb08e3e575f h1:p4VB7kIXpOQvVn1ZaTIVp+3vuYAXFe3OJEvjbUYJLaA=
github.com/vishvananda/netns v0.0.0-20210104183010-2eb08e3e575f/go.mod h1:DD4vA1DwXk04H54A1oHXtwZmA0grkVMdPxx/VGLCah0=
@@ -627,6 +648,7 @@ github.com/xdg-go/scram v1.0.2/go.mod h1:1WAq6h33pAW+iRreB34OORO2Nf7qel3VV3fjBj+
github.com/xdg-go/stringprep v1.0.2/go.mod h1:8F9zXuvzgwmyT5DUm4GUfZGDdT3W+LCvS6+da4O5kxM=
github.com/xdg/scram v0.0.0-20180814205039-7eeb5667e42c/go.mod h1:lB8K/P019DLNhemzwFU4jHLhdvlE6uDZjXFejJXr49I=
github.com/xdg/stringprep v0.0.0-20180714160509-73f8eece6fdc/go.mod h1:Jhud4/sHMO4oL310DaZAKk9ZaJ08SJfe+sJh0HrGL1Y=
github.com/xdg/stringprep v1.0.0/go.mod h1:Jhud4/sHMO4oL310DaZAKk9ZaJ08SJfe+sJh0HrGL1Y=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/xlab/treeprint v0.0.0-20181112141820-a009c3971eca/go.mod h1:ce1O1j6UtZfjr22oyGxGLbauSBp2YVXpARAosm7dHBg=
github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=
@@ -662,6 +684,7 @@ golang.org/x/crypto v0.0.0-20181029021203-45a5f77698d3/go.mod h1:6SG95UA2DQfeDnf
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190320223903-b7391e95e576/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190422162423-af44ce270edf/go.mod h1:WFFai1msRO1wXaEeE5yQxYXgSfI8pQAWXbQop6sCtWE=
golang.org/x/crypto v0.0.0-20190506204251-e1dfcc566284/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190530122614-20be4c3c3ed5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=

View File

@@ -6,28 +6,30 @@ import (
"flag"
"fmt"
"io/ioutil"
"mizuserver/pkg/api"
"mizuserver/pkg/config"
"mizuserver/pkg/controllers"
"mizuserver/pkg/middlewares"
"mizuserver/pkg/models"
"mizuserver/pkg/oas"
"mizuserver/pkg/routes"
"mizuserver/pkg/servicemap"
"mizuserver/pkg/up9"
"mizuserver/pkg/utils"
"net/http"
"os"
"os/signal"
"path"
"path/filepath"
"plugin"
"sort"
"strconv"
"strings"
"syscall"
"time"
"github.com/up9inc/mizu/agent/pkg/middlewares"
"github.com/up9inc/mizu/agent/pkg/models"
"github.com/up9inc/mizu/agent/pkg/oas"
"github.com/up9inc/mizu/agent/pkg/routes"
"github.com/up9inc/mizu/agent/pkg/servicemap"
"github.com/up9inc/mizu/agent/pkg/up9"
"github.com/up9inc/mizu/agent/pkg/utils"
"github.com/up9inc/mizu/agent/pkg/elastic"
"github.com/up9inc/mizu/agent/pkg/controllers"
"github.com/up9inc/mizu/agent/pkg/api"
"github.com/up9inc/mizu/agent/pkg/config"
v1 "k8s.io/api/core/v1"
"github.com/antelman107/net-wait-go/wait"
@@ -40,6 +42,11 @@ import (
"github.com/up9inc/mizu/shared/logger"
"github.com/up9inc/mizu/tap"
tapApi "github.com/up9inc/mizu/tap/api"
amqpExt "github.com/up9inc/mizu/tap/extensions/amqp"
httpExt "github.com/up9inc/mizu/tap/extensions/http"
kafkaExt "github.com/up9inc/mizu/tap/extensions/kafka"
redisExt "github.com/up9inc/mizu/tap/extensions/redis"
)
var tapperMode = flag.Bool("tap", false, "Run in tapper mode without API")
@@ -59,7 +66,6 @@ const (
socketConnectionRetries = 30
socketConnectionRetryDelay = time.Second * 2
socketHandshakeTimeout = time.Second * 2
uiIndexPath = "./site/index.html"
)
func main() {
@@ -157,6 +163,7 @@ func enableExpFeatureIfNeeded() {
if config.Config.ServiceMap {
servicemap.GetInstance().SetConfig(config.Config)
}
elastic.GetInstance().Configure(config.Config.Elastic)
}
func configureBasenineServer(host string, port string) {
@@ -165,7 +172,7 @@ func configureBasenineServer(host string, port string) {
wait.WithWait(200*time.Millisecond),
wait.WithBreak(50*time.Millisecond),
wait.WithDeadline(5*time.Second),
wait.WithDebug(true),
wait.WithDebug(config.Config.LogLevel == logging.DEBUG),
).Do([]string{fmt.Sprintf("%s:%s", host, port)}) {
logger.Log.Panicf("Basenine is not available!")
}
@@ -189,36 +196,36 @@ func configureBasenineServer(host string, port string) {
}
func loadExtensions() {
dir, _ := filepath.Abs(filepath.Dir(os.Args[0]))
extensionsDir := path.Join(dir, "./extensions/")
files, err := ioutil.ReadDir(extensionsDir)
if err != nil {
logger.Log.Fatal(err)
}
extensions = make([]*tapApi.Extension, len(files))
extensions = make([]*tapApi.Extension, 4)
extensionsMap = make(map[string]*tapApi.Extension)
for i, file := range files {
filename := file.Name()
logger.Log.Infof("Loading extension: %s", filename)
extension := &tapApi.Extension{
Path: path.Join(extensionsDir, filename),
}
plug, _ := plugin.Open(extension.Path)
extension.Plug = plug
symDissector, err := plug.Lookup("Dissector")
var dissector tapApi.Dissector
var ok bool
dissector, ok = symDissector.(tapApi.Dissector)
if err != nil || !ok {
panic(fmt.Sprintf("Failed to load the extension: %s", extension.Path))
}
dissector.Register(extension)
extension.Dissector = dissector
extensions[i] = extension
extensionsMap[extension.Protocol.Name] = extension
}
extensionAmqp := &tapApi.Extension{}
dissectorAmqp := amqpExt.NewDissector()
dissectorAmqp.Register(extensionAmqp)
extensionAmqp.Dissector = dissectorAmqp
extensions[0] = extensionAmqp
extensionsMap[extensionAmqp.Protocol.Name] = extensionAmqp
extensionHttp := &tapApi.Extension{}
dissectorHttp := httpExt.NewDissector()
dissectorHttp.Register(extensionHttp)
extensionHttp.Dissector = dissectorHttp
extensions[1] = extensionHttp
extensionsMap[extensionHttp.Protocol.Name] = extensionHttp
extensionKafka := &tapApi.Extension{}
dissectorKafka := kafkaExt.NewDissector()
dissectorKafka.Register(extensionKafka)
extensionKafka.Dissector = dissectorKafka
extensions[2] = extensionKafka
extensionsMap[extensionKafka.Protocol.Name] = extensionKafka
extensionRedis := &tapApi.Extension{}
dissectorRedis := redisExt.NewDissector()
dissectorRedis.Register(extensionRedis)
extensionRedis.Dissector = dissectorRedis
extensions[3] = extensionRedis
extensionsMap[extensionRedis.Protocol.Name] = extensionRedis
sort.Slice(extensions, func(i, j int) bool {
return extensions[i].Protocol.Priority < extensions[j].Protocol.Priority
@@ -244,16 +251,23 @@ func hostApi(socketHarOutputChannel chan<- *tapApi.OutputChannelItem) {
app.Use(DisableRootStaticCache())
if err := setUIFlags(); err != nil {
logger.Log.Errorf("Error setting ui mode, err: %v", err)
var staticFolder string
if config.Config.StandaloneMode {
staticFolder = "./site-standalone"
} else {
staticFolder = "./site"
}
if config.Config.StandaloneMode {
app.Use(static.ServeRoot("/", "./site-standalone"))
} else {
app.Use(static.ServeRoot("/", "./site"))
indexStaticFile := staticFolder + "/index.html"
if err := setUIFlags(indexStaticFile); err != nil {
logger.Log.Errorf("Error setting ui flags, err: %v", err)
}
app.Use(static.ServeRoot("/", staticFolder))
app.NoRoute(func(c *gin.Context) {
c.File(indexStaticFile)
})
app.Use(middlewares.CORSMiddleware()) // This has to be called after the static middleware, does not work if its called before
api.WebSocketRoutes(app, &eventHandlers, startTime)
@@ -274,7 +288,6 @@ func hostApi(socketHarOutputChannel chan<- *tapApi.OutputChannelItem) {
routes.EntriesRoutes(app)
routes.MetadataRoutes(app)
routes.StatusRoutes(app)
routes.NotFoundRoute(app)
utils.StartServer(app)
}
@@ -289,7 +302,7 @@ func DisableRootStaticCache() gin.HandlerFunc {
}
}
func setUIFlags() error {
func setUIFlags(uiIndexPath string) error {
read, err := ioutil.ReadFile(uiIndexPath)
if err != nil {
return err
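
For reference, a standalone sketch of the readiness-wait pattern used in configureBasenineServer above. The host and port values are placeholders, and the debug flag is hard-coded here, whereas the diff ties it to the configured log level; the option set otherwise mirrors the calls shown in the hunk.

package main

import (
	"fmt"
	"time"

	"github.com/antelman107/net-wait-go/wait"
)

func main() {
	host, port := "localhost", "9099" // placeholder values

	// Poll the TCP address until it accepts connections or the deadline expires.
	// In the real code WithDebug is enabled only when the log level is DEBUG.
	ok := wait.New(
		wait.WithWait(200*time.Millisecond),
		wait.WithBreak(50*time.Millisecond),
		wait.WithDeadline(5*time.Second),
		wait.WithDebug(false),
	).Do([]string{fmt.Sprintf("%s:%s", host, port)})
	if !ok {
		panic("Basenine is not available!")
	}
}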

View File

@@ -5,26 +5,28 @@ import (
"context"
"encoding/json"
"fmt"
"mizuserver/pkg/holder"
"mizuserver/pkg/providers"
"os"
"path"
"sort"
"strings"
"time"
"mizuserver/pkg/servicemap"
"github.com/up9inc/mizu/agent/pkg/elastic"
"github.com/up9inc/mizu/agent/pkg/har"
"github.com/up9inc/mizu/agent/pkg/holder"
"github.com/up9inc/mizu/agent/pkg/providers"
"github.com/up9inc/mizu/agent/pkg/servicemap"
"github.com/up9inc/mizu/agent/pkg/models"
"github.com/up9inc/mizu/agent/pkg/oas"
"github.com/up9inc/mizu/agent/pkg/resolver"
"github.com/up9inc/mizu/agent/pkg/utils"
"github.com/google/martian/har"
"github.com/up9inc/mizu/shared"
"github.com/up9inc/mizu/shared/logger"
tapApi "github.com/up9inc/mizu/tap/api"
"mizuserver/pkg/models"
"mizuserver/pkg/oas"
"mizuserver/pkg/resolver"
"mizuserver/pkg/utils"
basenine "github.com/up9inc/basenine/client/go"
)
@@ -41,10 +43,8 @@ func StartResolving(namespace string) {
res.Start(ctx)
go func() {
for {
select {
case err := <-errOut:
logger.Log.Infof("name resolving error %s", err)
}
err := <-errOut
logger.Log.Infof("name resolving error %s", err)
}
}()
@@ -66,7 +66,7 @@ func startReadingFiles(workingDir string) {
return
}
for true {
for {
dir, _ := os.Open(workingDir)
dirFiles, _ := dir.Readdir(-1)
@@ -117,15 +117,15 @@ func startReadingChannel(outputItems <-chan *tapApi.OutputChannelItem, extension
}
for item := range outputItems {
providers.EntryAdded()
extension := extensionsMap[item.Protocol.Name]
resolvedSource, resolvedDestionation := resolveIP(item.ConnectionInfo)
mizuEntry := extension.Dissector.Analyze(item, resolvedSource, resolvedDestionation)
if extension.Protocol.Name == "http" {
if !disableOASValidation {
var httpPair tapApi.HTTPRequestResponsePair
json.Unmarshal([]byte(mizuEntry.HTTPPair), &httpPair)
if err := json.Unmarshal([]byte(mizuEntry.HTTPPair), &httpPair); err != nil {
logger.Log.Error(err)
}
contract := handleOAS(ctx, doc, router, httpPair.Request.Payload.RawRequest, httpPair.Response.Payload.RawResponse, contractContent)
mizuEntry.ContractStatus = contract.Status
@@ -134,7 +134,7 @@ func startReadingChannel(outputItems <-chan *tapApi.OutputChannelItem, extension
mizuEntry.ContractContent = contract.Content
}
harEntry, err := utils.NewEntry(mizuEntry.Request, mizuEntry.Response, mizuEntry.StartTime, mizuEntry.ElapsedTime)
harEntry, err := har.NewEntry(mizuEntry.Request, mizuEntry.Response, mizuEntry.StartTime, mizuEntry.ElapsedTime)
if err == nil {
rules, _, _ := models.RunValidationRulesState(*harEntry, mizuEntry.Destination.Name)
mizuEntry.Rules = rules
@@ -147,9 +147,13 @@ func startReadingChannel(outputItems <-chan *tapApi.OutputChannelItem, extension
if err != nil {
panic(err)
}
providers.EntryAdded(len(data))
connection.SendText(string(data))
servicemap.GetInstance().NewTCPEntry(mizuEntry.Source, mizuEntry.Destination, &item.Protocol)
elastic.GetInstance().PushEntry(mizuEntry)
}
}

View File

@@ -2,18 +2,17 @@ package api
import (
"encoding/json"
"errors"
"fmt"
"mizuserver/pkg/models"
"net/http"
"sync"
"time"
"github.com/up9inc/mizu/agent/pkg/models"
"github.com/gin-gonic/gin"
"github.com/gorilla/websocket"
basenine "github.com/up9inc/basenine/client/go"
"github.com/up9inc/mizu/shared"
"github.com/up9inc/mizu/shared/debounce"
"github.com/up9inc/mizu/shared/logger"
tapApi "github.com/up9inc/mizu/tap/api"
)
@@ -42,7 +41,7 @@ var connectedWebsocketIdCounter = 0
func init() {
websocketUpgrader.CheckOrigin = func(r *http.Request) bool { return true } // like cors for web socket
connectedWebsockets = make(map[int]*SocketConnection, 0)
connectedWebsockets = make(map[int]*SocketConnection)
}
func WebSocketRoutes(app *gin.Engine, eventHandlers EventHandlers, startTime int64) {
@@ -94,7 +93,10 @@ func websocketHandler(w http.ResponseWriter, r *http.Request, eventHandlers Even
eventHandlers.WebSocketConnect(socketId, isTapper)
startTimeBytes, _ := models.CreateWebsocketStartTimeMessage(startTime)
SendToSocket(socketId, startTimeBytes)
if err = SendToSocket(socketId, startTimeBytes); err != nil {
logger.Log.Error(err)
}
for {
_, msg, err := ws.ReadMessage()
@@ -117,7 +119,9 @@ func websocketHandler(w http.ResponseWriter, r *http.Request, eventHandlers Even
AutoClose: 5000,
Text: fmt.Sprintf("Syntax error: %s", err.Error()),
})
SendToSocket(socketId, toastBytes)
if err := SendToSocket(socketId, toastBytes); err != nil {
logger.Log.Error(err)
}
break
}
@@ -137,7 +141,9 @@ func websocketHandler(w http.ResponseWriter, r *http.Request, eventHandlers Even
base := tapApi.Summarize(entry)
baseEntryBytes, _ := models.CreateBaseEntryWebSocketMessage(base)
SendToSocket(socketId, baseEntryBytes)
if err := SendToSocket(socketId, baseEntryBytes); err != nil {
logger.Log.Error(err)
}
}
}
@@ -156,7 +162,9 @@ func websocketHandler(w http.ResponseWriter, r *http.Request, eventHandlers Even
}
metadataBytes, _ := models.CreateWebsocketQueryMetadataMessage(metadata)
SendToSocket(socketId, metadataBytes)
if err := SendToSocket(socketId, metadataBytes); err != nil {
logger.Log.Error(err)
}
}
}
@@ -183,14 +191,10 @@ func socketCleanup(socketId int, socketConnection *SocketConnection) {
socketConnection.eventHandlers.WebSocketDisconnect(socketId, socketConnection.isTapper)
}
var db = debounce.NewDebouncer(time.Second*5, func() {
logger.Log.Error("Successfully sent to socket")
})
func SendToSocket(socketId int, message []byte) error {
socketObj := connectedWebsockets[socketId]
if socketObj == nil {
return errors.New("Socket is disconnected")
return fmt.Errorf("Socket %v is disconnected", socketId)
}
var sent = false
@@ -204,7 +208,10 @@ func SendToSocket(socketId int, message []byte) error {
socketObj.lock.Lock() // gorilla socket panics from concurrent writes to a single socket
err := socketObj.connection.WriteMessage(1, message)
socketObj.lock.Unlock()
sent = true
return err
if err != nil {
return fmt.Errorf("Failed to write message to socket %v, err: %w", socketId, err)
}
return nil
}
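
The locking around WriteMessage above exists because gorilla/websocket does not allow concurrent writers on a single connection. A minimal standalone sketch of that pattern follows; the type and method names are illustrative, not the agent's actual ones.

package socketsketch

import (
	"sync"

	"github.com/gorilla/websocket"
)

// lockedConn serializes writes to one websocket connection, since
// gorilla/websocket panics on concurrent writes to the same socket.
type lockedConn struct {
	conn *websocket.Conn
	mu   sync.Mutex
}

func (l *lockedConn) WriteText(message []byte) error {
	l.mu.Lock()
	defer l.mu.Unlock()
	// websocket.TextMessage is the constant 1 passed to WriteMessage in the handler above.
	return l.conn.WriteMessage(websocket.TextMessage, message)
}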

View File

@@ -3,12 +3,13 @@ package api
import (
"encoding/json"
"fmt"
"mizuserver/pkg/models"
"mizuserver/pkg/providers"
"mizuserver/pkg/providers/tappers"
"mizuserver/pkg/up9"
"sync"
"github.com/up9inc/mizu/agent/pkg/models"
"github.com/up9inc/mizu/agent/pkg/providers"
"github.com/up9inc/mizu/agent/pkg/providers/tappers"
"github.com/up9inc/mizu/agent/pkg/up9"
tapApi "github.com/up9inc/mizu/tap/api"
"github.com/up9inc/mizu/shared"
@@ -54,9 +55,8 @@ func (h *RoutesEventHandlers) WebSocketDisconnect(socketId int, isTapper bool) {
func BroadcastToBrowserClients(message []byte) {
for _, socketId := range browserClientSocketUUIDs {
go func(socketId int) {
err := SendToSocket(socketId, message)
if err != nil {
logger.Log.Errorf("error sending message to socket ID %d: %v", socketId, err)
if err := SendToSocket(socketId, message); err != nil {
logger.Log.Error(err)
}
}(socketId)
}

View File

@@ -2,21 +2,22 @@ package controllers
import (
"context"
"net/http"
"regexp"
"time"
"github.com/gin-gonic/gin"
"github.com/up9inc/mizu/agent/pkg/config"
"github.com/up9inc/mizu/agent/pkg/models"
"github.com/up9inc/mizu/agent/pkg/providers"
"github.com/up9inc/mizu/agent/pkg/providers/tapConfig"
"github.com/up9inc/mizu/agent/pkg/providers/tappedPods"
"github.com/up9inc/mizu/agent/pkg/providers/tappers"
"github.com/up9inc/mizu/shared"
"github.com/up9inc/mizu/shared/kubernetes"
"github.com/up9inc/mizu/shared/logger"
tapApi "github.com/up9inc/mizu/tap/api"
v1 "k8s.io/api/core/v1"
"mizuserver/pkg/config"
"mizuserver/pkg/models"
"mizuserver/pkg/providers"
"mizuserver/pkg/providers/tapConfig"
"mizuserver/pkg/providers/tappedPods"
"mizuserver/pkg/providers/tappers"
"net/http"
"regexp"
"time"
)
var cancelTapperSyncer context.CancelFunc

View File

@@ -2,13 +2,14 @@ package controllers
import (
"encoding/json"
"mizuserver/pkg/models"
"mizuserver/pkg/utils"
"mizuserver/pkg/validation"
"net/http"
"strconv"
"time"
"github.com/up9inc/mizu/agent/pkg/har"
"github.com/up9inc/mizu/agent/pkg/models"
"github.com/up9inc/mizu/agent/pkg/validation"
"github.com/gin-gonic/gin"
basenine "github.com/up9inc/basenine/client/go"
@@ -26,7 +27,7 @@ func InitExtensionsMap(ref map[string]*tapApi.Extension) {
func Error(c *gin.Context, err error) bool {
if err != nil {
logger.Log.Errorf("Error getting entry: %v", err)
c.Error(err)
_ = c.Error(err)
c.AbortWithStatusJSON(http.StatusInternalServerError, gin.H{
"error": true,
"type": "error",
@@ -127,11 +128,13 @@ func GetEntry(c *gin.Context) {
var rules []map[string]interface{}
var isRulesEnabled bool
if entry.Protocol.Name == "http" {
harEntry, _ := utils.NewEntry(entry.Request, entry.Response, entry.StartTime, entry.ElapsedTime)
harEntry, _ := har.NewEntry(entry.Request, entry.Response, entry.StartTime, entry.ElapsedTime)
_, rulesMatched, _isRulesEnabled := models.RunValidationRulesState(*harEntry, entry.Destination.Name)
isRulesEnabled = _isRulesEnabled
inrec, _ := json.Marshal(rulesMatched)
json.Unmarshal(inrec, &rules)
if err := json.Unmarshal(inrec, &rules); err != nil {
logger.Log.Error(err)
}
}
c.JSON(http.StatusOK, tapApi.EntryWrapper{

View File

@@ -1,9 +1,10 @@
package controllers
import (
"mizuserver/pkg/providers"
"net/http"
"github.com/up9inc/mizu/agent/pkg/providers"
"github.com/gin-gonic/gin"
"github.com/up9inc/mizu/shared/logger"
)

View File

@@ -1,10 +1,11 @@
package controllers
import (
"github.com/gin-gonic/gin"
"github.com/up9inc/mizu/shared"
"mizuserver/pkg/version"
"net/http"
"github.com/gin-gonic/gin"
"github.com/up9inc/mizu/agent/pkg/version"
"github.com/up9inc/mizu/shared"
)
func GetVersion(c *gin.Context) {

View File

@@ -1,11 +1,12 @@
package controllers
import (
"net/http"
"github.com/chanced/openapi"
"github.com/gin-gonic/gin"
"github.com/up9inc/mizu/agent/pkg/oas"
"github.com/up9inc/mizu/shared/logger"
"mizuserver/pkg/oas"
"net/http"
)
func GetOASServers(c *gin.Context) {

View File

@@ -1,10 +1,12 @@
package controllers
import (
"github.com/gin-gonic/gin"
"mizuserver/pkg/oas"
"net/http/httptest"
"testing"
"github.com/up9inc/mizu/agent/pkg/oas"
"github.com/gin-gonic/gin"
)
func TestGetOASServers(t *testing.T) {
@@ -15,7 +17,6 @@ func TestGetOASServers(t *testing.T) {
GetOASServers(c)
t.Logf("Written body: %s", recorder.Body.String())
return
}
func TestGetOASAllSpecs(t *testing.T) {
@@ -26,7 +27,6 @@ func TestGetOASAllSpecs(t *testing.T) {
GetOASAllSpecs(c)
t.Logf("Written body: %s", recorder.Body.String())
return
}
func TestGetOASSpec(t *testing.T) {
@@ -39,5 +39,4 @@ func TestGetOASSpec(t *testing.T) {
GetOASSpec(c)
t.Logf("Written body: %s", recorder.Body.String())
return
}

View File

@@ -1,9 +1,10 @@
package controllers
import (
"mizuserver/pkg/servicemap"
"net/http"
"github.com/up9inc/mizu/agent/pkg/servicemap"
"github.com/gin-gonic/gin"
)

View File

@@ -7,7 +7,7 @@ import (
"net/http/httptest"
"testing"
"mizuserver/pkg/servicemap"
"github.com/up9inc/mizu/agent/pkg/servicemap"
"github.com/gin-gonic/gin"
"github.com/stretchr/testify/suite"

View File

@@ -2,17 +2,18 @@ package controllers
import (
"encoding/json"
"net/http"
"github.com/gin-gonic/gin"
"github.com/up9inc/mizu/agent/pkg/api"
"github.com/up9inc/mizu/agent/pkg/holder"
"github.com/up9inc/mizu/agent/pkg/providers"
"github.com/up9inc/mizu/agent/pkg/providers/tappedPods"
"github.com/up9inc/mizu/agent/pkg/providers/tappers"
"github.com/up9inc/mizu/agent/pkg/up9"
"github.com/up9inc/mizu/agent/pkg/validation"
"github.com/up9inc/mizu/shared"
"github.com/up9inc/mizu/shared/logger"
"mizuserver/pkg/api"
"mizuserver/pkg/holder"
"mizuserver/pkg/providers"
"mizuserver/pkg/providers/tappedPods"
"mizuserver/pkg/providers/tappers"
"mizuserver/pkg/up9"
"mizuserver/pkg/validation"
"net/http"
)
func HealthCheck(c *gin.Context) {

View File

@@ -1,7 +1,7 @@
package controllers
import (
"mizuserver/pkg/providers"
"github.com/up9inc/mizu/agent/pkg/providers"
"github.com/gin-gonic/gin"
"github.com/up9inc/mizu/shared/logger"

View File

@@ -0,0 +1,120 @@
package elastic
import (
"bytes"
"crypto/tls"
"encoding/json"
"github.com/elastic/go-elasticsearch/v7"
"github.com/up9inc/mizu/shared"
"github.com/up9inc/mizu/shared/logger"
"github.com/up9inc/mizu/tap/api"
"net/http"
"sync"
"time"
)
type client struct {
es *elasticsearch.Client
index string
insertedCount int
}
var instance *client
var once sync.Once
func GetInstance() *client {
once.Do(func() {
instance = newClient()
})
return instance
}
func (client *client) Configure(config shared.ElasticConfig) {
if config.Url == "" || config.User == "" || config.Password == "" {
logger.Log.Infof("No elastic configuration was supplied, elastic exporter disabled")
return
}
transport := http.DefaultTransport
tlsClientConfig := &tls.Config{InsecureSkipVerify: true}
transport.(*http.Transport).TLSClientConfig = tlsClientConfig
cfg := elasticsearch.Config{
Addresses: []string{config.Url},
Username: config.User,
Password: config.Password,
Transport: transport,
}
es, err := elasticsearch.NewClient(cfg)
if err != nil {
logger.Log.Fatalf("Failed to initialize elastic client %v", err)
}
// Have the client instance return a response
res, err := es.Info()
if err != nil {
logger.Log.Fatalf("Elastic client.Info() ERROR: %v", err)
} else {
client.es = es
client.index = "mizu_traffic_http_" + time.Now().Format("2006_01_02_15_04")
client.insertedCount = 0
logger.Log.Infof("Elastic client configured, index: %s, cluster info: %v", client.index, res)
}
defer res.Body.Close()
}
func newClient() *client {
return &client{
es: nil,
index: "",
}
}
type httpEntry struct {
Source *api.TCP `json:"src"`
Destination *api.TCP `json:"dst"`
Outgoing bool `json:"outgoing"`
CreatedAt time.Time `json:"createdAt"`
Request map[string]interface{} `json:"request"`
Response map[string]interface{} `json:"response"`
Summary string `json:"summary"`
Method string `json:"method"`
Status int `json:"status"`
ElapsedTime int64 `json:"elapsedTime"`
Path string `json:"path"`
}
func (client *client) PushEntry(entry *api.Entry) {
if client.es == nil {
return
}
if entry.Protocol.Name != "http" {
return
}
entryToPush := httpEntry{
Source: entry.Source,
Destination: entry.Destination,
Outgoing: entry.Outgoing,
CreatedAt: entry.StartTime,
Request: entry.Request,
Response: entry.Response,
Summary: entry.Summary,
Method: entry.Method,
Status: entry.Status,
ElapsedTime: entry.ElapsedTime,
Path: entry.Path,
}
entryJson, err := json.Marshal(entryToPush)
if err != nil {
logger.Log.Errorf("json.Marshal ERROR: %v", err)
return
}
var buffer bytes.Buffer
buffer.WriteString(string(entryJson))
res, _ := client.es.Index(client.index, &buffer)
if res.StatusCode == 201 {
client.insertedCount += 1
}
}
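
A minimal sketch of how this exporter is expected to be driven. The URL and credentials are placeholders; the field names match shared.ElasticConfig as read by Configure above, and PushEntry is the call made per dissected entry elsewhere in the agent.

package main

import (
	"github.com/up9inc/mizu/agent/pkg/elastic"
	"github.com/up9inc/mizu/shared"
)

func main() {
	// Configure is a no-op (exporter disabled) unless all three fields are set.
	elastic.GetInstance().Configure(shared.ElasticConfig{
		Url:      "https://elasticsearch.example.com:9200", // placeholder
		User:     "mizu",                                    // placeholder
		Password: "changeme",                                // placeholder
	})

	// Elsewhere, every dissected entry is offered to the exporter; non-HTTP
	// entries and calls made before a successful Configure are ignored:
	// elastic.GetInstance().PushEntry(mizuEntry)
}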

375
agent/pkg/har/types.go Normal file
View File

@@ -0,0 +1,375 @@
package har
import (
"encoding/base64"
"github.com/up9inc/mizu/shared/logger"
"time"
"unicode/utf8"
)
/*
HTTP Archive (HAR) format
https://w3c.github.io/web-performance/specs/HAR/Overview.html
*/
// HAR is a container type for deserialization
type HAR struct {
Log Log `json:"log"`
}
// Log represents the root of the exported data. This object MUST be present and its name MUST be "log".
type Log struct {
// The object contains the following name/value pairs:
// Required. Version number of the format.
Version string `json:"version"`
// Required. An object of type creator that contains the name and version
// information of the log creator application.
Creator Creator `json:"creator"`
// Optional. An object of type browser that contains the name and version
// information of the user agent.
Browser Browser `json:"browser"`
// Optional. An array of objects of type page, each representing one exported
// (tracked) page. Leave out this field if the application does not support
// grouping by pages.
Pages []Page `json:"pages,omitempty"`
// Required. An array of objects of type entry, each representing one
// exported (tracked) HTTP request.
Entries []Entry `json:"entries"`
// Optional. A comment provided by the user or the application. Sorting
// entries by startedDateTime (starting from the oldest) is the preferred way
// to export data since it can make importing faster. However, the reader
// entries by startedDateTime (starting from the oldest) is the preferred way
// application should always make sure the array is sorted (if required for
// the import).
Comment string `json:"comment"`
}
// Creator contains information about the log creator application
type Creator struct {
// Required. The name of the application that created the log.
Name string `json:"name"`
// Required. The version number of the application that created the log.
Version string `json:"version"`
// Optional. A comment provided by the user or the application.
Comment string `json:"comment,omitempty"`
}
// Browser that created the log
type Browser struct {
// Required. The name of the browser that created the log.
Name string `json:"name"`
// Required. The version number of the browser that created the log.
Version string `json:"version"`
// Optional. A comment provided by the user or the browser.
Comment string `json:"comment"`
}
// Page object for every exported web page and one <entry> object for every HTTP request.
// In case when an HTTP trace tool isn't able to group requests by a page,
// the <pages> object is empty and individual requests don't have a parent page.
type Page struct {
/* There is one <page> object for every exported web page and one <entry>
object for every HTTP request. In case when an HTTP trace tool isn't able to
group requests by a page, the <pages> object is empty and individual
requests don't have a parent page.
*/
// Date and time stamp for the beginning of the page load
// (ISO 8601 YYYY-MM-DDThh:mm:ss.sTZD, e.g. 2009-07-24T19:20:30.45+01:00).
StartedDateTime string `json:"startedDateTime"`
// Unique identifier of a page within the <log>. Entries use it to refer to the parent page.
ID string `json:"id"`
// Page title.
Title string `json:"title"`
// Detailed timing info about page load.
PageTiming PageTiming `json:"pageTiming"`
// (new in 1.2) A comment provided by the user or the application.
Comment string `json:"comment,omitempty"`
}
// PageTiming describes timings for various events (states) fired during the page load.
// All times are specified in milliseconds. If a time info is not available appropriate field is set to -1.
type PageTiming struct {
// Content of the page loaded. Number of milliseconds since page load started
// (page.startedDateTime). Use -1 if the timing does not apply to the current
// request.
// Depending on the browser, the onContentLoad property represents the DOMContentLoaded
// event or document.readyState == interactive.
OnContentLoad int `json:"onContentLoad"`
// Page is loaded (onLoad event fired). Number of milliseconds since page
// load started (page.startedDateTime). Use -1 if the timing does not apply
// to the current request.
OnLoad int `json:"onLoad"`
// (new in 1.2) A comment provided by the user or the application.
Comment string `json:"comment"`
}
// Entry is a unique, optional Reference to the parent page.
// Leave out this field if the application does not support grouping by pages.
type Entry struct {
Pageref string `json:"pageref,omitempty"`
// Date and time stamp of the request start
// (ISO 8601 YYYY-MM-DDThh:mm:ss.sTZD).
StartedDateTime string `json:"startedDateTime"`
// Total elapsed time of the request in milliseconds. This is the sum of all
// timings available in the timings object (i.e. not including -1 values) .
Time int `json:"time"`
// Detailed info about the request.
Request Request `json:"request"`
// Detailed info about the response.
Response Response `json:"response"`
// Info about cache usage.
Cache Cache `json:"cache"`
// Detailed timing info about request/response round trip.
PageTimings PageTimings `json:"pageTimings"`
// optional (new in 1.2) IP address of the server that was connected
// (result of DNS resolution).
ServerIPAddress string `json:"serverIPAddress,omitempty"`
// optional (new in 1.2) Unique ID of the parent TCP/IP connection, can be
// the client port number. Note that a port number doesn't have to be unique
// identifier in cases where the port is shared for more connections. If the
// port isn't available for the application, any other unique connection ID
// can be used instead (e.g. connection index). Leave out this field if the
// application doesn't support this info.
Connection string `json:"connection,omitempty"`
// (new in 1.2) A comment provided by the user or the application.
Comment string `json:"comment,omitempty"`
}
// Request contains detailed info about performed request.
type Request struct {
// Request method (GET, POST, ...).
Method string `json:"method"`
// Absolute URL of the request (fragments are not included).
URL string `json:"url"`
// Request HTTP Version.
HTTPVersion string `json:"httpVersion"`
// List of cookie objects.
Cookies []Cookie `json:"cookies"`
// List of header objects.
Headers []NVP `json:"headers"`
// List of query parameter objects.
QueryString []NVP `json:"queryString"`
// Posted data.
PostData PostData `json:"postData"`
// Total number of bytes from the start of the HTTP request message until
// (and including) the double CRLF before the body. Set to -1 if the info
// is not available.
HeaderSize int `json:"headerSize"`
// Size of the request body (POST data payload) in bytes. Set to -1 if the
// info is not available.
BodySize int `json:"bodySize"`
// (new in 1.2) A comment provided by the user or the application.
Comment string `json:"comment"`
}
// Response contains detailed info about the response.
type Response struct {
// Response status.
Status int `json:"status"`
// Response status description.
StatusText string `json:"statusText"`
// Response HTTP Version.
HTTPVersion string `json:"httpVersion"`
// List of cookie objects.
Cookies []Cookie `json:"cookies"`
// List of header objects.
Headers []NVP `json:"headers"`
// Details about the response body.
Content Content `json:"content"`
// Redirection target URL from the Location response header.
RedirectURL string `json:"redirectURL"`
// Total number of bytes from the start of the HTTP response message until
// (and including) the double CRLF before the body. Set to -1 if the info is
// not available.
// The size of received response-headers is computed only from headers that
// are really received from the server. Additional headers appended by the
// browser are not included in this number, but they appear in the list of
// header objects.
HeadersSize int `json:"headersSize"`
// Size of the received response body in bytes. Set to zero in case of
// responses coming from the cache (304). Set to -1 if the info is not
// available.
BodySize int `json:"bodySize"`
// optional (new in 1.2) A comment provided by the user or the application.
Comment string `json:"comment,omitempty"`
}
// Cookie contains list of all cookies (used in <request> and <response> objects).
type Cookie struct {
// The name of the cookie.
Name string `json:"name"`
// The cookie value.
Value string `json:"value"`
// optional The path pertaining to the cookie.
Path string `json:"path,omitempty"`
// optional The host of the cookie.
Domain string `json:"domain,omitempty"`
// optional Cookie expiration time.
// (ISO 8601 YYYY-MM-DDThh:mm:ss.sTZD, e.g. 2009-07-24T19:20:30.123+02:00).
Expires string `json:"expires,omitempty"`
// optional Set to true if the cookie is HTTP only, false otherwise.
HTTPOnly bool `json:"httpOnly,omitempty"`
// optional (new in 1.2) True if the cookie was transmitted over ssl, false
// otherwise.
Secure bool `json:"secure,omitempty"`
// optional (new in 1.2) A comment provided by the user or the application.
Comment string `json:"comment,omitempty"`
}
// NVP is simply a name/value pair with a comment
type NVP struct {
Name string `json:"name"`
Value string `json:"value"`
Comment string `json:"comment,omitempty"`
}
// PostData describes posted data, if any (embedded in <request> object).
type PostData struct {
// Mime type of posted data.
MimeType string `json:"mimeType"`
// List of posted parameters (in case of URL encoded parameters).
Params []PostParam `json:"params"`
// Plain text posted data
Text string `json:"text"`
// optional (new in 1.2) A comment provided by the user or the
// application.
Comment string `json:"comment,omitempty"`
}
func (d PostData) B64Decoded() (bool, []byte, string) {
// there is a weird gap in HAR spec 1.2, that does not define encoding for binary POST bodies
// we have our own convention of putting `base64` into the comment field to handle it similarly to the response `Content`
return b64Decoded(d.Comment, d.Text)
}
// PostParam is a list of posted parameters, if any (embedded in <postData> object).
type PostParam struct {
// name of a posted parameter.
Name string `json:"name"`
// optional value of a posted parameter or content of a posted file.
Value string `json:"value,omitempty"`
// optional name of a posted file.
FileName string `json:"fileName,omitempty"`
// optional content type of a posted file.
ContentType string `json:"contentType,omitempty"`
// optional (new in 1.2) A comment provided by the user or the application.
Comment string `json:"comment,omitempty"`
}
// Content describes details about response content (embedded in <response> object).
type Content struct {
// Length of the returned content in bytes. Should be equal to
// response.bodySize if there is no compression and bigger when the content
// has been compressed.
Size int `json:"size"`
// optional Number of bytes saved. Leave out this field if the information
// is not available.
Compression int `json:"compression,omitempty"`
// MIME type of the response text (value of the Content-Type response
// header). The charset attribute of the MIME type is included (if
// available).
MimeType string `json:"mimeType"`
// optional Response body sent from the server or loaded from the browser
// cache. This field is populated with textual content only. The text field
// is either HTTP decoded text or an encoded (e.g. "base64") representation of
// the response body. Leave out this field if the information is not
// available.
Text string `json:"text,omitempty"`
// optional (new in 1.2) Encoding used for response text field e.g
// "base64". Leave out this field if the text field is HTTP decoded
// (decompressed & unchunked), then trans-coded from its original character
// set into UTF-8.
Encoding string `json:"encoding,omitempty"`
// optional (new in 1.2) A comment provided by the user or the application.
Comment string `json:"comment,omitempty"`
}
func (c Content) B64Decoded() (bool, []byte, string) {
return b64Decoded(c.Encoding, c.Text)
}
func b64Decoded(enc string, text string) (isBinary bool, asBytes []byte, asString string) {
if enc == "base64" {
decoded, err := base64.StdEncoding.DecodeString(text)
if err != nil {
logger.Log.Warningf("Failed to decode content as base64: %s", text)
return false, []byte(text), text
}
valid := utf8.Valid(decoded)
return !valid, decoded, string(decoded)
} else {
return false, nil, text
}
}
// Cache contains info about a request coming from browser cache.
type Cache struct {
// optional State of a cache entry before the request. Leave out this field
// if the information is not available.
BeforeRequest CacheObject `json:"beforeRequest,omitempty"`
// optional State of a cache entry after the request. Leave out this field if
// the information is not available.
AfterRequest CacheObject `json:"afterRequest,omitempty"`
// optional (new in 1.2) A comment provided by the user or the application.
Comment string `json:"comment,omitempty"`
}
// CacheObject is used by both beforeRequest and afterRequest
type CacheObject struct {
// optional - Expiration time of the cache entry.
Expires string `json:"expires,omitempty"`
// The last time the cache entry was opened.
LastAccess string `json:"lastAccess"`
// Etag
ETag string `json:"eTag"`
// The number of times the cache entry has been opened.
HitCount int `json:"hitCount"`
// optional (new in 1.2) A comment provided by the user or the application.
Comment string `json:"comment,omitempty"`
}
// PageTimings describes various phases within request-response round trip.
// All times are specified in milliseconds.
type PageTimings struct {
Blocked int `json:"blocked,omitempty"`
// optional - Time spent in a queue waiting for a network connection. Use -1
// if the timing does not apply to the current request.
DNS int `json:"dns,omitempty"`
// optional - DNS resolution time. The time required to resolve a host name.
// Use -1 if the timing does not apply to the current request.
Connect int `json:"connect,omitempty"`
// optional - Time required to create TCP connection. Use -1 if the timing
// does not apply to the current request.
Send int `json:"send"`
// Time required to send HTTP request to the server.
Wait int `json:"wait"`
// Waiting for a response from the server.
Receive int `json:"receive"`
// Time required to read entire response from the server (or cache).
Ssl int `json:"ssl,omitempty"`
// optional (new in 1.2) - Time required for SSL/TLS negotiation. If this
// field is defined then the time is also included in the connect field (to
// ensure backward compatibility with HAR 1.1). Use -1 if the timing does not
// apply to the current request.
Comment string `json:"comment,omitempty"`
// optional (new in 1.2) - A comment provided by the user or the application.
}
// TestResult contains results for an individual HTTP request
type TestResult struct {
URL string `json:"url"`
Status int `json:"status"` // 200, 500, etc.
StartTime time.Time `json:"startTime"`
EndTime time.Time `json:"endTime"`
Latency int `json:"latency"` // milliseconds
Method string `json:"method"`
HarFile string `json:"harfile"`
}
// aliases for martian lib compatibility
type Header = NVP
type QueryString = NVP
type Param = PostParam
type Timings = PageTimings
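
A small sketch showing how the types above serialize to a HAR document. All values are placeholders; the import path follows the new agent/pkg/har location introduced by this change.

package main

import (
	"encoding/json"
	"fmt"
	"time"

	"github.com/up9inc/mizu/agent/pkg/har"
)

func main() {
	// A minimal log with a single entry; every value here is a placeholder.
	doc := har.HAR{Log: har.Log{
		Version: "1.2",
		Creator: har.Creator{Name: "example", Version: "0.0"},
		Entries: []har.Entry{{
			StartedDateTime: time.Now().Format(time.RFC3339),
			Time:            12,
			Request:         har.Request{Method: "GET", URL: "http://example.com/", HTTPVersion: "HTTP/1.1"},
			Response:        har.Response{Status: 200, StatusText: "OK", HTTPVersion: "HTTP/1.1"},
			PageTimings:     har.PageTimings{Send: -1, Wait: -1, Receive: 12},
		}},
	}}

	out, _ := json.MarshalIndent(doc, "", "  ")
	fmt.Println(string(out))
}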

View File

@@ -0,0 +1,38 @@
package har
import "testing"
func TestContentEncoded(t *testing.T) {
testCases := []struct {
text string
isBinary bool
expectedStr string
binaryLen int
}{
{"not-base64", false, "not-base64", 10},
{"dGVzdA==", false, "test", 4},
{"test", true, "\f@A", 3}, // valid UTF-8 with some non-printable chars
{"IsDggPCAgPiAgID8gICAgN/vv/e/v/u/v7/9v7+/vyIKIu+3kO+3ke+3ku+3k++3lO+3le+3lu+3l++3mO+3me+3mu+3m++3nO+3ne+3nu+3n++3oO+3oe+3ou+3o++3pO+3pe+3pu+3p++3qO+3qe+3qu+3q++3rO+3re+3ru+3ryIK", true, "test", 132}, // invalid UTF-8 (thus binary), taken from https://www.cl.cam.ac.uk/~mgk25/ucs/examples/UTF-8-test.txt
}
for _, tc := range testCases {
c := Content{
Encoding: "base64",
Text: tc.text,
}
isBinary, asBytes, asString := c.B64Decoded()
_ = asBytes
if tc.isBinary != isBinary {
t.Errorf("Binary flag mismatch: %t != %t", tc.isBinary, isBinary)
}
if !isBinary && tc.expectedStr != asString {
t.Errorf("Decode value mismatch: %s != %s", tc.expectedStr, asString)
}
if tc.binaryLen != len(asBytes) {
t.Errorf("Binary len mismatch: %d != %d", tc.binaryLen, len(asBytes))
}
}
}

View File

@@ -1,4 +1,4 @@
package utils
package har
import (
"bytes"
@@ -8,7 +8,6 @@ import (
"strings"
"time"
"github.com/google/martian/har"
"github.com/up9inc/mizu/shared/logger"
)
@@ -55,14 +54,14 @@ import (
// return cookies
//}
func BuildHeaders(rawHeaders []interface{}) ([]har.Header, string, string, string, string, string) {
func BuildHeaders(rawHeaders []interface{}) ([]Header, string, string, string, string, string) {
var host, scheme, authority, path, status string
headers := make([]har.Header, 0, len(rawHeaders))
headers := make([]Header, 0, len(rawHeaders))
for _, header := range rawHeaders {
h := header.(map[string]interface{})
headers = append(headers, har.Header{
headers = append(headers, Header{
Name: h["name"].(string),
Value: h["value"].(string),
})
@@ -87,8 +86,8 @@ func BuildHeaders(rawHeaders []interface{}) ([]har.Header, string, string, strin
return headers, host, scheme, authority, path, status
}
func BuildPostParams(rawParams []interface{}) []har.Param {
params := make([]har.Param, 0, len(rawParams))
func BuildPostParams(rawParams []interface{}) []Param {
params := make([]Param, 0, len(rawParams))
for _, param := range rawParams {
p := param.(map[string]interface{})
name := ""
@@ -108,10 +107,10 @@ func BuildPostParams(rawParams []interface{}) []har.Param {
contentType = p["contentType"].(string)
}
params = append(params, har.Param{
params = append(params, Param{
Name: name,
Value: value,
Filename: fileName,
FileName: fileName,
ContentType: contentType,
})
}
@@ -119,25 +118,25 @@ func BuildPostParams(rawParams []interface{}) []har.Param {
return params
}
func NewRequest(request map[string]interface{}) (harRequest *har.Request, err error) {
func NewRequest(request map[string]interface{}) (harRequest *Request, err error) {
headers, host, scheme, authority, path, _ := BuildHeaders(request["_headers"].([]interface{}))
cookies := make([]har.Cookie, 0) // BuildCookies(request["_cookies"].([]interface{}))
cookies := make([]Cookie, 0) // BuildCookies(request["_cookies"].([]interface{}))
postData, _ := request["postData"].(map[string]interface{})
mimeType, _ := postData["mimeType"]
mimeType := postData["mimeType"]
if mimeType == nil || len(mimeType.(string)) == 0 {
mimeType = "text/html"
}
text, _ := postData["text"]
text := postData["text"]
postDataText := ""
if text != nil {
postDataText = text.(string)
}
queryString := make([]har.QueryString, 0)
queryString := make([]QueryString, 0)
for _, _qs := range request["_queryString"].([]interface{}) {
qs := _qs.(map[string]interface{})
queryString = append(queryString, har.QueryString{
queryString = append(queryString, QueryString{
Name: qs["name"].(string),
Value: qs["value"].(string),
})
@@ -148,21 +147,21 @@ func NewRequest(request map[string]interface{}) (harRequest *har.Request, err er
url = fmt.Sprintf("%s://%s%s", scheme, authority, path)
}
harParams := make([]har.Param, 0)
harParams := make([]Param, 0)
if postData["params"] != nil {
harParams = BuildPostParams(postData["params"].([]interface{}))
}
harRequest = &har.Request{
harRequest = &Request{
Method: request["method"].(string),
URL: url,
HTTPVersion: request["httpVersion"].(string),
HeadersSize: -1,
BodySize: int64(bytes.NewBufferString(postDataText).Len()),
HeaderSize: -1,
BodySize: bytes.NewBufferString(postDataText).Len(),
QueryString: queryString,
Headers: headers,
Cookies: cookies,
PostData: &har.PostData{
PostData: PostData{
MimeType: mimeType.(string),
Params: harParams,
Text: postDataText,
@@ -172,27 +171,27 @@ func NewRequest(request map[string]interface{}) (harRequest *har.Request, err er
return
}
func NewResponse(response map[string]interface{}) (harResponse *har.Response, err error) {
func NewResponse(response map[string]interface{}) (harResponse *Response, err error) {
headers, _, _, _, _, _status := BuildHeaders(response["_headers"].([]interface{}))
cookies := make([]har.Cookie, 0) // BuildCookies(response["_cookies"].([]interface{}))
cookies := make([]Cookie, 0) // BuildCookies(response["_cookies"].([]interface{}))
content, _ := response["content"].(map[string]interface{})
mimeType, _ := content["mimeType"]
mimeType := content["mimeType"]
if mimeType == nil || len(mimeType.(string)) == 0 {
mimeType = "text/html"
}
encoding, _ := content["encoding"]
text, _ := content["text"]
encoding := content["encoding"]
text := content["text"]
bodyText := ""
if text != nil {
bodyText = text.(string)
}
harContent := &har.Content{
harContent := &Content{
Encoding: encoding.(string),
MimeType: mimeType.(string),
Text: []byte(bodyText),
Size: int64(len(bodyText)),
Text: bodyText,
Size: len(bodyText),
}
status := int(response["status"].(float64))
@@ -206,20 +205,20 @@ func NewResponse(response map[string]interface{}) (harResponse *har.Response, er
}
}
harResponse = &har.Response{
harResponse = &Response{
HTTPVersion: response["httpVersion"].(string),
Status: status,
StatusText: response["statusText"].(string),
HeadersSize: -1,
BodySize: int64(bytes.NewBufferString(bodyText).Len()),
BodySize: bytes.NewBufferString(bodyText).Len(),
Headers: headers,
Cookies: cookies,
Content: harContent,
Content: *harContent,
}
return
}
func NewEntry(request map[string]interface{}, response map[string]interface{}, startTime time.Time, elapsedTime int64) (*har.Entry, error) {
func NewEntry(request map[string]interface{}, response map[string]interface{}, startTime time.Time, elapsedTime int64) (*Entry, error) {
harRequest, err := NewRequest(request)
if err != nil {
logger.Log.Errorf("Failed converting request to HAR %s (%v,%+v)", err, err, err)
@@ -236,16 +235,16 @@ func NewEntry(request map[string]interface{}, response map[string]interface{}, s
elapsedTime = 1
}
harEntry := har.Entry{
StartedDateTime: startTime,
Time: elapsedTime,
Request: harRequest,
Response: harResponse,
Cache: &har.Cache{},
Timings: &har.Timings{
harEntry := Entry{
StartedDateTime: startTime.Format(time.RFC3339),
Time: int(elapsedTime),
Request: *harRequest,
Response: *harResponse,
Cache: Cache{},
PageTimings: PageTimings{
Send: -1,
Wait: -1,
Receive: elapsedTime,
Receive: int(elapsedTime),
},
}

View File

@@ -1,6 +1,6 @@
package holder
import "mizuserver/pkg/resolver"
import "github.com/up9inc/mizu/agent/pkg/resolver"
var k8sResolver *resolver.Resolver
@@ -11,4 +11,3 @@ func SetResolver(param *resolver.Resolver) {
func GetResolver() *resolver.Resolver {
return k8sResolver
}

View File

@@ -1,8 +1,8 @@
package middlewares
import (
"mizuserver/pkg/config"
"mizuserver/pkg/providers"
"github.com/up9inc/mizu/agent/pkg/config"
"github.com/up9inc/mizu/agent/pkg/providers"
"github.com/gin-gonic/gin"
ory "github.com/ory/kratos-client-go"

View File

@@ -2,11 +2,11 @@ package models
import (
"encoding/json"
"mizuserver/pkg/rules"
"github.com/up9inc/mizu/agent/pkg/har"
"github.com/up9inc/mizu/agent/pkg/rules"
tapApi "github.com/up9inc/mizu/tap/api"
"github.com/google/martian/har"
basenine "github.com/up9inc/basenine/client/go"
"github.com/up9inc/mizu/shared"
"github.com/up9inc/mizu/tap"

View File

@@ -4,8 +4,6 @@ import (
"bufio"
"encoding/json"
"errors"
"github.com/google/martian/har"
"github.com/up9inc/mizu/shared/logger"
"io"
"io/ioutil"
"os"
@@ -13,10 +11,14 @@ import (
"sort"
"strings"
"testing"
"github.com/up9inc/mizu/agent/pkg/har"
"github.com/up9inc/mizu/shared/logger"
)
func getFiles(baseDir string) (result []string, err error) {
result = make([]string, 0, 0)
result = make([]string, 0)
logger.Log.Infof("Reading files from tree: %s", baseDir)
// https://yourbasic.org/golang/list-files-in-directory/
@@ -51,32 +53,42 @@ func fileSize(fname string) int64 {
return fi.Size()
}
func feedEntries(fromFiles []string) (err error) {
func feedEntries(fromFiles []string, isSync bool) (count int, err error) {
badFiles := make([]string, 0)
cnt := 0
for _, file := range fromFiles {
logger.Log.Info("Processing file: " + file)
ext := strings.ToLower(filepath.Ext(file))
eCnt := 0
switch ext {
case ".har":
err = feedFromHAR(file)
eCnt, err = feedFromHAR(file, isSync)
if err != nil {
logger.Log.Warning("Failed processing file: " + err.Error())
badFiles = append(badFiles, file)
continue
}
case ".ldjson":
err = feedFromLDJSON(file)
eCnt, err = feedFromLDJSON(file, isSync)
if err != nil {
logger.Log.Warning("Failed processing file: " + err.Error())
badFiles = append(badFiles, file)
continue
}
default:
return errors.New("Unsupported file extension: " + ext)
return 0, errors.New("Unsupported file extension: " + ext)
}
cnt += eCnt
}
return nil
for _, f := range badFiles {
logger.Log.Infof("Bad file: %s", f)
}
return cnt, nil
}
func feedFromHAR(file string) error {
func feedFromHAR(file string, isSync bool) (int, error) {
fd, err := os.Open(file)
if err != nil {
panic(err)
@@ -86,23 +98,41 @@ func feedFromHAR(file string) error {
data, err := ioutil.ReadAll(fd)
if err != nil {
return err
return 0, err
}
var harDoc har.HAR
err = json.Unmarshal(data, &harDoc)
if err != nil {
return err
return 0, err
}
cnt := 0
for _, entry := range harDoc.Log.Entries {
GetOasGeneratorInstance().PushEntry(entry)
cnt += 1
feedEntry(&entry, isSync)
}
return nil
return cnt, nil
}
func feedFromLDJSON(file string) error {
func feedEntry(entry *har.Entry, isSync bool) {
if entry.Response.Status == 302 {
logger.Log.Debugf("Dropped traffic entry due to permanent redirect status: %s", entry.StartedDateTime)
}
if strings.Contains(entry.Request.URL, "taboola") {
logger.Log.Debugf("Interesting: %s", entry.Request.URL)
}
if isSync {
GetOasGeneratorInstance().entriesChan <- *entry // blocking variant, right?
} else {
GetOasGeneratorInstance().PushEntry(entry)
}
}
func feedFromLDJSON(file string, isSync bool) (int, error) {
fd, err := os.Open(file)
if err != nil {
panic(err)
@@ -113,8 +143,8 @@ func feedFromLDJSON(file string) error {
reader := bufio.NewReader(fd)
var meta map[string]interface{}
buf := strings.Builder{}
cnt := 0
for {
substr, isPrefix, err := reader.ReadLine()
if err == io.EOF {
@@ -132,26 +162,28 @@ func feedFromLDJSON(file string) error {
if meta == nil {
err := json.Unmarshal([]byte(line), &meta)
if err != nil {
return err
return 0, err
}
} else {
var entry har.Entry
err := json.Unmarshal([]byte(line), &entry)
if err != nil {
logger.Log.Warningf("Failed decoding entry: %s", line)
} else {
cnt += 1
feedEntry(&entry, isSync)
}
GetOasGeneratorInstance().PushEntry(&entry)
}
}
return nil
return cnt, nil
}
func TestFilesList(t *testing.T) {
res, err := getFiles("./test_artifacts/")
t.Log(len(res))
t.Log(res)
if err != nil || len(res) != 2 {
if err != nil || len(res) != 3 {
t.Logf("Should return 2 files but returned %d", len(res))
t.FailNow()
}

View File

@@ -12,6 +12,7 @@ var (
patEmail = regexp.MustCompile(`^\w+([-+.']\w+)*@\w+([-.]\w+)*\.\w+([-.]\w+)*$`)
patHexLower = regexp.MustCompile(`(0x)?[0-9a-f]{6,}`)
patHexUpper = regexp.MustCompile(`(0x)?[0-9A-F]{6,}`)
patLongNum = regexp.MustCompile(`\d{6,}`)
)
func IsGibberish(str string) bool {
@@ -27,16 +28,12 @@ func IsGibberish(str string) bool {
return true
}
if patHexLower.MatchString(str) || patHexUpper.MatchString(str) {
if patHexLower.MatchString(str) || patHexUpper.MatchString(str) || patLongNum.MatchString(str) {
return true
}
noise := noiseLevel(str)
if noise >= 0.2 {
return true
}
return false
return noise >= 0.2
}
func noiseLevel(str string) (score float64) {
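
The added patLongNum rule treats any run of six or more digits as gibberish (typically a numeric ID embedded in a path segment). A standalone illustration of just that rule; the helper below is not the agent's code.

package main

import (
	"fmt"
	"regexp"
)

// Mirrors the new pattern above: six or more consecutive digits.
var patLongNum = regexp.MustCompile(`\d{6,}`)

func main() {
	for _, s := range []string{"users", "1234567", "order-000042-x"} {
		// "users" -> false, "1234567" -> true, "order-000042-x" -> true
		fmt.Printf("%-16s longNum=%v\n", s, patLongNum.MatchString(s))
	}
}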

View File

@@ -17,17 +17,19 @@ var ignoredHeaders = []string{
"x-att-deviceid", "x-correlation-id", "correlation-id", "x-client-data",
"x-http-method-override", "x-real-ip", "x-request-id", "x-request-start", "x-requested-with", "x-uidh",
"x-same-domain", "x-content-type-options", "x-frame-options", "x-xss-protection",
"x-wap-profile", "x-scheme",
"newrelic", "x-cloud-trace-context", "sentry-trace",
"expires", "set-cookie", "p3p", "location", "content-security-policy", "content-security-policy-report-only",
"last-modified", "content-language",
"x-wap-profile", "x-scheme", "status", "x-cache", "x-application-context", "retry-after",
"newrelic", "x-cloud-trace-context", "sentry-trace", "x-cache-hits", "x-served-by", "x-span-name",
"expires", "set-cookie", "p3p", "content-security-policy", "content-security-policy-report-only",
"last-modified", "content-language", "x-varnish", "true-client-ip", "akamai-origin-hop",
"keep-alive", "etag", "alt-svc", "x-csrf-token", "x-ua-compatible", "vary", "x-powered-by",
"age", "allow", "www-authenticate",
"age", "allow", "www-authenticate", "expect-ct", "timing-allow-origin", "referrer-policy",
"x-aspnet-version", "x-aspnetmvc-version", "x-timer", "x-abuse-info", "x-mod-pagespeed",
"duration_ms", // UP9 custom
}
var ignoredHeaderPrefixes = []string{
":", "accept-", "access-control-", "if-", "sec-", "grpc-",
"x-forwarded-", "x-original-",
"x-forwarded-", "x-original-", "cf-",
"x-up9-", "x-envoy-", "x-hasura-", "x-b3-", "x-datadog-", "x-envoy-", "x-amz-", "x-newrelic-", "x-prometheus-",
"x-akamai-", "x-spotim-", "x-amzn-", "x-ratelimit-",
}

View File

@@ -3,10 +3,11 @@ package oas
import (
"context"
"encoding/json"
"github.com/google/martian/har"
"github.com/up9inc/mizu/shared/logger"
"net/url"
"sync"
"github.com/up9inc/mizu/agent/pkg/har"
"github.com/up9inc/mizu/shared/logger"
)
var (
@@ -17,7 +18,7 @@ var (
func GetOasGeneratorInstance() *oasGenerator {
syncOnce.Do(func() {
instance = newOasGenerator()
logger.Log.Debug("Oas Generator Initialized")
logger.Log.Debug("OAS Generator Initialized")
})
return instance
}

View File

@@ -1,18 +1,19 @@
package oas
import (
"encoding/base64"
"encoding/json"
"errors"
"github.com/chanced/openapi"
"github.com/google/martian/har"
"github.com/google/uuid"
"github.com/up9inc/mizu/shared/logger"
"mime"
"net/url"
"strconv"
"strings"
"sync"
"github.com/up9inc/mizu/agent/pkg/har"
"github.com/chanced/openapi"
"github.com/google/uuid"
"github.com/up9inc/mizu/shared/logger"
)
type reqResp struct { // hello, generics in Go
@@ -31,7 +32,7 @@ func NewGen(server string) *SpecGen {
spec.Version = "3.1.0"
info := openapi.Info{Title: server}
info.Version = "0.0"
info.Version = "1.0"
spec.Info = &info
spec.Paths = &openapi.Paths{Items: map[openapi.PathValue]*openapi.PathObj{}}
@@ -176,11 +177,18 @@ func (g *SpecGen) handlePathObj(entry *har.Entry) (string, error) {
if isExtIgnored(urlParsed.Path) {
logger.Log.Debugf("Dropped traffic entry due to ignored extension: %s", urlParsed.Path)
return "", nil
}
ctype := getRespCtype(entry.Response)
if entry.Request.Method == "OPTIONS" {
logger.Log.Debugf("Dropped traffic entry due to its method: %s", urlParsed.Path)
return "", nil
}
ctype := getRespCtype(&entry.Response)
if isCtypeIgnored(ctype) {
logger.Log.Debugf("Dropped traffic entry due to ignored response ctype: %s", ctype)
return "", nil
}
if entry.Response.Status < 100 {
@@ -193,9 +201,19 @@ func (g *SpecGen) handlePathObj(entry *har.Entry) (string, error) {
return "", nil
}
split := strings.Split(urlParsed.Path, "/")
if entry.Response.Status == 502 || entry.Response.Status == 503 || entry.Response.Status == 504 {
logger.Log.Debugf("Dropped traffic entry due to temporary server error: %s", entry.StartedDateTime)
return "", nil
}
var split []string
if urlParsed.RawPath != "" {
split = strings.Split(urlParsed.RawPath, "/")
} else {
split = strings.Split(urlParsed.Path, "/")
}
node := g.tree.getOrSet(split, new(openapi.PathObj))
opObj, err := handleOpObj(entry, node.ops)
opObj, err := handleOpObj(entry, node.pathObj)
if opObj != nil {
return opObj.OperationID, err
@@ -216,12 +234,12 @@ func handleOpObj(entry *har.Entry, pathObj *openapi.PathObj) (*openapi.Operation
return nil, nil
}
err = handleRequest(entry.Request, opObj, isSuccess)
err = handleRequest(&entry.Request, opObj, isSuccess)
if err != nil {
return nil, err
}
err = handleResponse(entry.Response, opObj, isSuccess)
err = handleResponse(&entry.Response, opObj, isSuccess)
if err != nil {
return nil, err
}
@@ -233,26 +251,22 @@ func handleRequest(req *har.Request, opObj *openapi.Operation, isSuccess bool) e
// TODO: we don't handle the situation when header/qstr param can be defined on pathObj level. Also the path param defined on opObj
qstrGW := nvParams{
In: openapi.InQuery,
Pairs: func() []NVPair {
return qstrToNVP(req.QueryString)
},
In: openapi.InQuery,
Pairs: req.QueryString,
IsIgnored: func(name string) bool { return false },
GeneralizeName: func(name string) string { return name },
}
handleNameVals(qstrGW, &opObj.Parameters)
hdrGW := nvParams{
In: openapi.InHeader,
Pairs: func() []NVPair {
return hdrToNVP(req.Headers)
},
In: openapi.InHeader,
Pairs: req.Headers,
IsIgnored: isHeaderIgnored,
GeneralizeName: strings.ToLower,
}
handleNameVals(hdrGW, &opObj.Parameters)
if req.PostData != nil && req.PostData.Text != "" && isSuccess {
if req.PostData.Text != "" && isSuccess {
reqBody, err := getRequestBody(req, opObj, isSuccess)
if err != nil {
return err
@@ -260,7 +274,7 @@ func handleRequest(req *har.Request, opObj *openapi.Operation, isSuccess bool) e
if reqBody != nil {
reqCtype := getReqCtype(req)
reqMedia, err := fillContent(reqResp{Req: req}, reqBody.Content, reqCtype, err)
reqMedia, err := fillContent(reqResp{Req: req}, reqBody.Content, reqCtype)
if err != nil {
return err
}
@@ -282,7 +296,7 @@ func handleResponse(resp *har.Response, opObj *openapi.Operation, isSuccess bool
respCtype := getRespCtype(resp)
respContent := respObj.Content
respMedia, err := fillContent(reqResp{Resp: resp}, respContent, respCtype, err)
respMedia, err := fillContent(reqResp{Resp: resp}, respContent, respCtype)
if err != nil {
return err
}
@@ -330,11 +344,9 @@ func handleRespHeaders(reqHeaders []har.Header, respObj *openapi.ResponseObj) {
}
}
}
return
}
func fillContent(reqResp reqResp, respContent openapi.Content, ctype string, err error) (*openapi.MediaType, error) {
func fillContent(reqResp reqResp, respContent openapi.Content, ctype string) (*openapi.MediaType, error) {
content, found := respContent[ctype]
if !found {
respContent[ctype] = &openapi.MediaType{}
@@ -342,43 +354,36 @@ func fillContent(reqResp reqResp, respContent openapi.Content, ctype string, err
}
var text string
var isBinary bool
if reqResp.Req != nil {
text = reqResp.Req.PostData.Text
isBinary, _, text = reqResp.Req.PostData.B64Decoded()
} else {
text = decRespText(reqResp.Resp.Content)
isBinary, _, text = reqResp.Resp.Content.B64Decoded()
}
var exampleMsg []byte
// try treating it as json
any, isJSON := anyJSON(text)
if isJSON {
// re-marshal with forced indent
exampleMsg, err = json.MarshalIndent(any, "", "\t")
if err != nil {
panic("Failed to re-marshal value, super-strange")
}
} else {
exampleMsg, err = json.Marshal(text)
if err != nil {
return nil, err
}
}
content.Example = exampleMsg
return respContent[ctype], nil
}
func decRespText(content *har.Content) (res string) {
res = string(content.Text)
if content.Encoding == "base64" {
data, err := base64.StdEncoding.DecodeString(res)
if err != nil {
logger.Log.Warningf("error decoding response text as base64: %s", err)
if !isBinary && text != "" {
var exampleMsg []byte
// try treating it as json
any, isJSON := anyJSON(text)
if isJSON {
// re-marshal with forced indent
if msg, err := json.MarshalIndent(any, "", "\t"); err != nil {
panic("Failed to re-marshal value, super-strange")
} else {
exampleMsg = msg
}
} else {
res = string(data)
if msg, err := json.Marshal(text); err != nil {
return nil, err
} else {
exampleMsg = msg
}
}
content.Example = exampleMsg
}
return
return respContent[ctype], nil
}
func getRespCtype(resp *har.Response) string {

View File
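The reworked fillContent above only attaches an example when the decoded body is non-binary and non-empty: JSON bodies are re-marshalled with tab indentation, anything else is stored as a JSON string. A standalone sketch of that decision — the helper name is illustrative, and the HAR/OpenAPI plumbing is omitted:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// buildExample returns the bytes to store as an example for a body:
// JSON payloads are re-indented, other text is marshalled as a JSON string.
func buildExample(text string) ([]byte, error) {
	if text == "" {
		return nil, nil // nothing to exemplify, as in the diff's isBinary/empty guard
	}
	var anyVal interface{}
	if err := json.Unmarshal([]byte(text), &anyVal); err == nil {
		return json.MarshalIndent(anyVal, "", "\t") // re-marshal with forced indent
	}
	return json.Marshal(text) // fall back to a quoted JSON string
}

func main() {
	ex, _ := buildExample(`{"id":42,"name":"demo"}`)
	fmt.Println(string(ex))
	ex, _ = buildExample("plain text body")
	fmt.Println(string(ex)) // "plain text body"
}
```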

@@ -2,49 +2,58 @@ package oas
import (
"encoding/json"
"github.com/chanced/openapi"
"github.com/google/martian/har"
"github.com/up9inc/mizu/shared/logger"
"io/ioutil"
"os"
"strings"
"testing"
"time"
"github.com/chanced/openapi"
"github.com/op/go-logging"
"github.com/up9inc/mizu/agent/pkg/har"
"github.com/up9inc/mizu/shared/logger"
)
// if started via env, write file into subdir
func writeFiles(label string, spec *openapi.OpenAPI) {
func outputSpec(label string, spec *openapi.OpenAPI, t *testing.T) {
content, err := json.MarshalIndent(spec, "", "\t")
if err != nil {
panic(err)
}
if os.Getenv("MIZU_OAS_WRITE_FILES") != "" {
path := "./oas-samples"
err := os.MkdirAll(path, 0o755)
if err != nil {
panic(err)
}
content, err := json.MarshalIndent(spec, "", "\t")
if err != nil {
panic(err)
}
err = ioutil.WriteFile(path+"/"+label+".json", content, 0644)
if err != nil {
panic(err)
}
t.Logf("Written: %s", label)
} else {
t.Logf("%s", string(content))
}
}
func TestEntries(t *testing.T) {
logger.InitLoggerStderrOnly(logging.INFO)
files, err := getFiles("./test_artifacts/")
// files, err = getFiles("/media/bigdisk/UP9")
if err != nil {
t.Log(err)
t.FailNow()
}
GetOasGeneratorInstance().Start()
loadStartingOAS()
if err := feedEntries(files); err != nil {
cnt, err := feedEntries(files, true)
if err != nil {
t.Log(err)
t.Fail()
}
loadStartingOAS()
waitQueueProcessed()
svcs := strings.Builder{}
GetOasGeneratorInstance().ServiceSpecs.Range(func(key, val interface{}) bool {
@@ -78,33 +87,35 @@ func TestEntries(t *testing.T) {
t.FailNow()
}
specText, _ := json.MarshalIndent(spec, "", "\t")
t.Logf("%s", string(specText))
outputSpec(svc, spec, t)
err = spec.Validate()
if err != nil {
t.Log(err)
t.FailNow()
}
writeFiles(svc, spec)
return true
})
logger.Log.Infof("Total entries: %d", cnt)
}
func TestFileLDJSON(t *testing.T) {
func TestFileSingle(t *testing.T) {
GetOasGeneratorInstance().Start()
file := "test_artifacts/output_rdwtyeoyrj.har.ldjson"
err := feedFromLDJSON(file)
// loadStartingOAS()
file := "test_artifacts/params.har"
files := []string{file}
cnt, err := feedEntries(files, true)
if err != nil {
logger.Log.Warning("Failed processing file: " + err.Error())
t.Fail()
}
loadStartingOAS()
waitQueueProcessed()
GetOasGeneratorInstance().ServiceSpecs.Range(func(_, val interface{}) bool {
GetOasGeneratorInstance().ServiceSpecs.Range(func(key, val interface{}) bool {
svc := key.(string)
gen := val.(*SpecGen)
spec, err := gen.GetSpec()
if err != nil {
@@ -112,8 +123,7 @@ func TestFileLDJSON(t *testing.T) {
t.FailNow()
}
specText, _ := json.MarshalIndent(spec, "", "\t")
t.Logf("%s", string(specText))
outputSpec(svc, spec, t)
err = spec.Validate()
if err != nil {
@@ -123,6 +133,19 @@ func TestFileLDJSON(t *testing.T) {
return true
})
logger.Log.Infof("Processed entries: %d", cnt)
}
func waitQueueProcessed() {
for {
time.Sleep(100 * time.Millisecond)
queue := len(GetOasGeneratorInstance().entriesChan)
logger.Log.Infof("Queue: %d", queue)
if queue < 1 {
break
}
}
}
func loadStartingOAS() {
@@ -149,26 +172,33 @@ func loadStartingOAS() {
gen.StartFromSpec(doc)
GetOasGeneratorInstance().ServiceSpecs.Store("catalogue", gen)
return
}
func TestEntriesNegative(t *testing.T) {
files := []string{"invalid"}
err := feedEntries(files)
_, err := feedEntries(files, false)
if err == nil {
t.Logf("Should have failed")
t.Fail()
}
}
func TestEntriesPositive(t *testing.T) {
files := []string{"test_artifacts/params.har"}
_, err := feedEntries(files, false)
if err != nil {
t.Logf("Failed")
t.Fail()
}
}
func TestLoadValidHAR(t *testing.T) {
inp := `{"startedDateTime": "2021-02-03T07:48:12.959000+00:00", "time": 1, "request": {"method": "GET", "url": "http://unresolved_target/1.0.0/health", "httpVersion": "HTTP/1.1", "cookies": [], "headers": [], "queryString": [], "headersSize": -1, "bodySize": -1}, "response": {"status": 200, "statusText": "OK", "httpVersion": "HTTP/1.1", "cookies": [], "headers": [], "content": {"size": 2, "mimeType": "", "text": "OK"}, "redirectURL": "", "headersSize": -1, "bodySize": 2}, "cache": {}, "timings": {"send": -1, "wait": -1, "receive": 1}}`
var entry *har.Entry
var err = json.Unmarshal([]byte(inp), &entry)
if err != nil {
t.Logf("Failed to decode entry: %s", err)
// t.FailNow() demonstrates the problem of library
t.FailNow() // demonstrates the problem of `martian` HAR library
}
}
@@ -193,5 +223,4 @@ func TestLoadValid3_1(t *testing.T) {
t.Log(err)
t.FailNow()
}
return
}

View File
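The tests above feed HAR entries into the generator's channel and then poll until the queue drains before asserting on the spec. A hedged sketch of that wait loop; the 100ms interval and the length check mirror the diff, while the channel and consumer here are stubs. Note that an empty channel only means no entry is still queued — an item already popped may still be mid-processing, which the tests accept as a cheap barrier:

```go
package main

import (
	"fmt"
	"time"
)

// waitQueueProcessed polls a work channel until it is empty.
func waitQueueProcessed(entries chan int) {
	for {
		time.Sleep(100 * time.Millisecond)
		if len(entries) < 1 {
			break
		}
	}
}

func main() {
	entries := make(chan int, 10)
	for i := 0; i < 5; i++ {
		entries <- i
	}
	go func() { // a stand-in consumer
		for range entries {
			time.Sleep(20 * time.Millisecond)
		}
	}()
	waitQueueProcessed(entries)
	fmt.Println("queue drained")
}
```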

@@ -13,10 +13,14 @@
"time": 1022,
"request": {
"method": "GET",
"url": "http://trcc-api-service/proxies/",
"url": "http://trcc-api-service/proxies;matrixparam=example/",
"httpVersion": "",
"cookies": [],
"headers": [
{
"name": "X-Custom-Demo-Header",
"value": "demo"
},
{
"name": "Sec-Fetch-Dest",
"value": "empty"
@@ -124,6 +128,10 @@
"httpVersion": "",
"cookies": [],
"headers": [
{
"name": "X-Custom-Demo-Header2",
"value": "demo2"
},
{
"name": "Vary",
"value": "Origin"
@@ -24568,7 +24576,7 @@
"bodySize": -1
},
"response": {
"status": 200,
"status": 308,
"statusText": "OK",
"httpVersion": "",
"cookies": [],
@@ -24635,7 +24643,7 @@
"time": 1,
"request": {
"method": "GET",
"url": "http://trcc-api-service/models/aws_kong5/suites/all/runs",
"url": "http://trcc-api-service/models/aws_kong5/suites/all/runs.png",
"httpVersion": "",
"cookies": [],
"headers": [
@@ -24894,7 +24902,7 @@
"bodySize": -1
},
"response": {
"status": 200,
"status": 0,
"statusText": "OK",
"httpVersion": "",
"cookies": [],

View File

@@ -0,0 +1,127 @@
{
"log": {
"version": "1.2",
"creator": {
"name": "mitmproxy har_dump",
"version": "0.1",
"comment": "mitmproxy version mitmproxy 4.0.4"
},
"entries": [
{
"startedDateTime": "2019-09-06T06:14:43.864529+00:00",
"time": 111,
"request": {
"method": "GET",
"url": "https://httpbin.org/e21f7112-3d3b-4632-9da3-a4af2e0e9166/sub1",
"httpVersion": "HTTP/1.1",
"cookies": [],
"headers": [],
"headersSize": 1542,
"bodySize": 0,
"queryString": []
},
"response": {
"status": 200,
"statusText": "OK",
"httpVersion": "HTTP/1.1",
"cookies": [],
"headers": [
],
"content": {
"mimeType": "text/html",
"text": "",
"size": 0
},
"redirectURL": "",
"headersSize": 245,
"bodySize": 39
},
"cache": {},
"timings": {
"send": 22,
"receive": 2,
"wait": 87,
"connect": -1,
"ssl": -1
},
"serverIPAddress": "54.210.29.33"
},
{
"startedDateTime": "2019-09-06T06:16:18.747122+00:00",
"time": 630,
"request": {
"method": "GET",
"url": "https://httpbin.org/952bea17-3776-11ea-9341-42010a84012a/sub2",
"httpVersion": "HTTP/1.1",
"cookies": [],
"headers": [],
"queryString": [],
"headersSize": 1542,
"bodySize": 0
},
"response": {
"status": 200,
"statusText": "OK",
"httpVersion": "HTTP/1.1",
"cookies": [],
"headers": [],
"content": {
"size": 39,
"compression": -20,
"mimeType": "application/json",
"text": "null"
},
"redirectURL": "",
"headersSize": 248,
"bodySize": 39
},
"cache": {},
"timings": {
"send": 14,
"receive": 4,
"wait": 350,
"connect": 262,
"ssl": -1
}
},
{
"startedDateTime": "2019-09-06T06:16:19.747122+00:00",
"time": 630,
"request": {
"method": "GET",
"url": "https://httpbin.org/952bea17-3776-11ea-9341-42010a84012a;mparam=matrixparam",
"httpVersion": "HTTP/1.1",
"cookies": [],
"headers": [],
"queryString": [],
"headersSize": 1542,
"bodySize": 0
},
"response": {
"status": 200,
"statusText": "OK",
"httpVersion": "HTTP/1.1",
"cookies": [],
"headers": [],
"content": {
"size": 39,
"compression": -20,
"mimeType": "application/json",
"text": "null"
},
"redirectURL": "",
"headersSize": 248,
"bodySize": 39
},
"cache": {},
"timings": {
"send": 14,
"receive": 4,
"wait": 350,
"connect": 262,
"ssl": -1
}
}
]
}
}

View File

@@ -1,34 +1,48 @@
package oas
import (
"github.com/chanced/openapi"
"github.com/up9inc/mizu/shared/logger"
"net/url"
"strconv"
"strings"
"github.com/chanced/openapi"
"github.com/up9inc/mizu/shared/logger"
)
type NodePath = []string
type Node struct {
constant *string
param *openapi.ParameterObj
ops *openapi.PathObj
parent *Node
children []*Node
constant *string
pathParam *openapi.ParameterObj
pathObj *openapi.PathObj
parent *Node
children []*Node
}
func (n *Node) getOrSet(path NodePath, pathObjToSet *openapi.PathObj) (node *Node) {
if pathObjToSet == nil {
func (n *Node) getOrSet(path NodePath, existingPathObj *openapi.PathObj) (node *Node) {
if existingPathObj == nil {
panic("Invalid function call")
}
pathChunk := path[0]
potentialMatrix := strings.SplitN(pathChunk, ";", 2)
if len(potentialMatrix) > 1 {
pathChunk = potentialMatrix[0]
logger.Log.Warningf("URI matrix params are not supported: %s", potentialMatrix[1])
}
chunkIsParam := strings.HasPrefix(pathChunk, "{") && strings.HasSuffix(pathChunk, "}")
pathChunk, err := url.PathUnescape(pathChunk)
if err != nil {
logger.Log.Warningf("URI segment is not correctly encoded: %s", pathChunk)
// any side effects on continuing?
}
chunkIsGibberish := IsGibberish(pathChunk) && !IsVersionString(pathChunk)
var paramObj *openapi.ParameterObj
if chunkIsParam && pathObjToSet != nil {
paramObj = findParamByName(pathObjToSet.Parameters, openapi.InPath, pathChunk[1:len(pathChunk)-1])
if chunkIsParam && existingPathObj != nil && existingPathObj.Parameters != nil {
_, paramObj = findParamByName(existingPathObj.Parameters, openapi.InPath, pathChunk[1:len(pathChunk)-1])
}
if paramObj == nil {
@@ -46,34 +60,30 @@ func (n *Node) getOrSet(path NodePath, pathObjToSet *openapi.PathObj) (node *Nod
n.children = append(n.children, node)
if paramObj != nil {
node.param = paramObj
node.pathParam = paramObj
} else if chunkIsGibberish {
initParams(&pathObjToSet.Parameters)
newParam := n.createParam()
node.param = newParam
appended := append(*pathObjToSet.Parameters, newParam)
pathObjToSet.Parameters = &appended
node.pathParam = newParam
} else {
node.constant = &pathChunk
}
}
// add example if it's a param
if node.param != nil && !chunkIsParam {
exmp := &node.param.Examples
// add example if it's a gibberish chunk
if node.pathParam != nil && !chunkIsParam {
exmp := &node.pathParam.Examples
err := fillParamExample(&exmp, pathChunk)
if err != nil {
logger.Log.Warningf("Failed to add example to a parameter: %s", err)
}
}
// TODO: eat up trailing slash, in a smart way: node.ops!=nil && path[1]==""
// TODO: eat up trailing slash, in a smart way: node.pathObj!=nil && path[1]==""
if len(path) > 1 {
return node.getOrSet(path[1:], pathObjToSet)
} else if node.ops == nil {
node.ops = pathObjToSet
return node.getOrSet(path[1:], existingPathObj)
} else if node.pathObj == nil {
node.pathObj = existingPathObj
}
return node
@@ -82,18 +92,24 @@ func (n *Node) getOrSet(path NodePath, pathObjToSet *openapi.PathObj) (node *Nod
func (n *Node) createParam() *openapi.ParameterObj {
name := "param"
// REST assumption, not always correct
if strings.HasSuffix(*n.constant, "es") && len(*n.constant) > 4 {
name = *n.constant
name = name[:len(name)-2] + "Id"
} else if strings.HasSuffix(*n.constant, "s") && len(*n.constant) > 3 {
name = *n.constant
name = name[:len(name)-1] + "Id"
if n.constant != nil { // the node is already a param
// REST assumption, not always correct
if strings.HasSuffix(*n.constant, "es") && len(*n.constant) > 4 {
name = *n.constant
name = name[:len(name)-2] + "Id"
} else if strings.HasSuffix(*n.constant, "s") && len(*n.constant) > 3 {
name = *n.constant
name = name[:len(name)-1] + "Id"
} else if isAlpha(*n.constant) {
name = *n.constant + "Id"
}
name = cleanNonAlnum([]byte(name))
}
newParam := createSimpleParam(name, "path", "string")
x := n.countParentParams()
if x > 1 {
if x > 0 {
newParam.Name = newParam.Name + strconv.Itoa(x)
}
@@ -111,7 +127,7 @@ func (n *Node) searchInParams(paramObj *openapi.ParameterObj, chunkIsGibberish b
// TODO: check the regex pattern of param? for exceptions etc
if paramObj != nil {
// TODO: mergeParam(subnode.param, paramObj)
// TODO: mergeParam(subnode.pathParam, paramObj)
return subnode
} else {
return subnode
@@ -145,15 +161,14 @@ func (n *Node) listPaths() *openapi.Paths {
var strChunk string
if n.constant != nil {
strChunk = *n.constant
} else if n.param != nil {
strChunk = "{" + n.param.Name + "}"
} else {
// this is the root node
}
} else if n.pathParam != nil {
strChunk = "{" + n.pathParam.Name + "}"
} // else -> this is the root node
// add self
if n.ops != nil {
paths.Items[openapi.PathValue(strChunk)] = n.ops
if n.pathObj != nil {
fillPathParams(n, n.pathObj)
paths.Items[openapi.PathValue(strChunk)] = n.pathObj
}
// recurse into children
@@ -173,6 +188,29 @@ func (n *Node) listPaths() *openapi.Paths {
return paths
}
func fillPathParams(n *Node, pathObj *openapi.PathObj) {
// collect all path parameters from parent hierarchy
node := n
for {
if node.pathParam != nil {
initParams(&pathObj.Parameters)
idx, paramObj := findParamByName(pathObj.Parameters, openapi.InPath, node.pathParam.Name)
if paramObj == nil {
appended := append(*pathObj.Parameters, node.pathParam)
pathObj.Parameters = &appended
} else {
(*pathObj.Parameters)[idx] = paramObj
}
}
node = node.parent
if node == nil {
break
}
}
}
type PathAndOp struct {
path string
op *openapi.Operation
@@ -192,7 +230,7 @@ func (n *Node) countParentParams() int {
res := 0
node := n
for {
if node.param != nil {
if node.pathParam != nil {
res++
}

View File
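The tree code above names auto-detected path parameters after the parent path chunk, singularising a trailing "s"/"es" and appending "Id", and suffixes a counter once ancestors have already contributed parameters. A compact sketch of that heuristic — the function shape is illustrative, and the repository additionally strips non-alphanumeric characters from the derived name:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// paramName derives a parameter name from the preceding constant path chunk,
// following the REST-ish assumption in the diff: "statuses" -> "statusId",
// "runs" -> "runId", otherwise append "Id" to an alphabetic chunk.
func paramName(parentChunk string, parentParamCount int) string {
	name := "param"
	if strings.HasSuffix(parentChunk, "es") && len(parentChunk) > 4 {
		name = parentChunk[:len(parentChunk)-2] + "Id"
	} else if strings.HasSuffix(parentChunk, "s") && len(parentChunk) > 3 {
		name = parentChunk[:len(parentChunk)-1] + "Id"
	} else if isAlpha(parentChunk) {
		name = parentChunk + "Id"
	}
	if parentParamCount > 0 { // disambiguate repeated params along the path
		name += strconv.Itoa(parentParamCount)
	}
	return name
}

func isAlpha(s string) bool {
	for _, r := range s {
		if (r < 'a' || r > 'z') && (r < 'A' || r > 'Z') {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(paramName("statuses", 0)) // statusId
	fmt.Println(paramName("runs", 1))     // runId1
}
```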

@@ -0,0 +1,37 @@
package oas
import (
"github.com/chanced/openapi"
"strings"
"testing"
)
func TestTree(t *testing.T) {
testCases := []struct {
inp string
numParams int
label string
}{
{"/", 0, ""},
{"/v1.0.0/config/launcher/sp_nKNHCzsN/f34efcae-6583-11eb-908a-00b0fcb9d4f6/vendor,init,conversation", 1, "vendor,init,conversation"},
{"/v1.0.0/config/launcher/sp_nKNHCzsN/{f34efcae-6583-11eb-908a-00b0fcb9d4f6}/vendor,init,conversation", 0, "vendor,init,conversation"},
{"/getSvgs/size/small/brand/SFLY/layoutId/170943/layoutVersion/1/sizeId/742/surface/0/isLandscape/true/childSkus/%7B%7D", 1, ""},
}
tree := new(Node)
for _, tc := range testCases {
split := strings.Split(tc.inp, "/")
pathObj := new(openapi.PathObj)
node := tree.getOrSet(split, pathObj)
fillPathParams(node, pathObj)
if node.constant != nil && *node.constant != tc.label {
t.Errorf("Constant does not match: %s != %s", *node.constant, tc.label)
}
if tc.numParams > 0 && (pathObj.Parameters == nil || len(*pathObj.Parameters) < tc.numParams) {
t.Errorf("Wrong num of params, expected: %d", tc.numParams)
}
}
}

View File

@@ -3,11 +3,13 @@ package oas
import (
"encoding/json"
"errors"
"github.com/chanced/openapi"
"github.com/google/martian/har"
"github.com/up9inc/mizu/shared/logger"
"strconv"
"strings"
"github.com/up9inc/mizu/agent/pkg/har"
"github.com/chanced/openapi"
"github.com/up9inc/mizu/shared/logger"
)
func exampleResolver(ref string) (*openapi.ExampleObj, error) {
@@ -32,16 +34,14 @@ func headerResolver(ref string) (*openapi.HeaderObj, error) {
func initParams(obj **openapi.ParameterList) {
if *obj == nil {
var params openapi.ParameterList
params = make([]openapi.Parameter, 0)
var params openapi.ParameterList = make([]openapi.Parameter, 0)
*obj = &params
}
}
func initHeaders(respObj *openapi.ResponseObj) {
if respObj.Headers == nil {
var created openapi.Headers
created = map[string]openapi.Header{}
var created openapi.Headers = map[string]openapi.Header{}
respObj.Headers = created
}
}
@@ -71,9 +71,10 @@ func createSimpleParam(name string, in openapi.In, ptype openapi.SchemaType) *op
return &newParam
}
func findParamByName(params *openapi.ParameterList, in openapi.In, name string) (pathParam *openapi.ParameterObj) {
func findParamByName(params *openapi.ParameterList, in openapi.In, name string) (idx int, pathParam *openapi.ParameterObj) {
caseInsensitive := in == openapi.InHeader
for _, param := range *params {
for i, param := range *params {
idx = i
paramObj, err := param.ResolveParameter(paramResolver)
if err != nil {
logger.Log.Warningf("Failed to resolve reference: %s", err)
@@ -84,12 +85,13 @@ func findParamByName(params *openapi.ParameterList, in openapi.In, name string)
continue
}
if paramObj.Name == name || (caseInsensitive && strings.ToLower(paramObj.Name) == strings.ToLower(name)) {
if paramObj.Name == name || (caseInsensitive && strings.EqualFold(paramObj.Name, name)) {
pathParam = paramObj
break
}
}
return pathParam
return idx, pathParam
}
func findHeaderByName(headers *openapi.Headers, name string) *openapi.HeaderObj {
@@ -100,44 +102,23 @@ func findHeaderByName(headers *openapi.Headers, name string) *openapi.HeaderObj
continue
}
if strings.ToLower(hname) == strings.ToLower(name) {
if strings.EqualFold(hname, name) {
return hdrObj
}
}
return nil
}
type NVPair struct {
Name string
Value string
}
type nvParams struct {
In openapi.In
Pairs func() []NVPair
Pairs []har.NVP
IsIgnored func(name string) bool
GeneralizeName func(name string) string
}
func qstrToNVP(list []har.QueryString) []NVPair {
res := make([]NVPair, len(list))
for idx, val := range list {
res[idx] = NVPair{Name: val.Name, Value: val.Value}
}
return res
}
func hdrToNVP(list []har.Header) []NVPair {
res := make([]NVPair, len(list))
for idx, val := range list {
res[idx] = NVPair{Name: val.Name, Value: val.Value}
}
return res
}
func handleNameVals(gw nvParams, params **openapi.ParameterList) {
visited := map[string]*openapi.ParameterObj{}
for _, pair := range gw.Pairs() {
for _, pair := range gw.Pairs {
if gw.IsIgnored(pair.Name) {
continue
}
@@ -145,7 +126,7 @@ func handleNameVals(gw nvParams, params **openapi.ParameterList) {
nameGeneral := gw.GeneralizeName(pair.Name)
initParams(params)
param := findParamByName(*params, gw.In, pair.Name)
_, param := findParamByName(*params, gw.In, pair.Name)
if param == nil {
param = createSimpleParam(nameGeneral, gw.In, openapi.TypeString)
appended := append(**params, param)
@@ -342,3 +323,26 @@ func anyJSON(text string) (anyVal interface{}, isJSON bool) {
return nil, false
}
func cleanNonAlnum(s []byte) string {
j := 0
for _, b := range s {
if ('a' <= b && b <= 'z') ||
('A' <= b && b <= 'Z') ||
('0' <= b && b <= '9') ||
b == ' ' {
s[j] = b
j++
}
}
return string(s[:j])
}
func isAlpha(s string) bool {
for _, r := range s {
if (r < 'a' || r > 'z') && (r < 'A' || r > 'Z') {
return false
}
}
return true
}

View File
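The utilities above switch case-insensitive comparisons to strings.EqualFold and make findParamByName also return the index so callers can replace a parameter in place. A small sketch of that lookup over a simplified parameter type; the openapi types are replaced by a plain struct, and returning -1 when nothing matches is a slight simplification of the diff:

```go
package main

import (
	"fmt"
	"strings"
)

type param struct {
	Name string
	In   string // "path", "header" or "query"
}

// findParamByName returns the index and pointer of the first matching
// parameter; header names are compared case-insensitively via EqualFold,
// which avoids the double ToLower of the old code.
func findParamByName(params []param, in, name string) (int, *param) {
	caseInsensitive := in == "header"
	for i := range params {
		p := &params[i]
		if p.In != in {
			continue
		}
		if p.Name == name || (caseInsensitive && strings.EqualFold(p.Name, name)) {
			return i, p
		}
	}
	return -1, nil
}

func main() {
	params := []param{{Name: "X-Request-Id", In: "header"}, {Name: "userId", In: "path"}}
	idx, p := findParamByName(params, "header", "x-request-id")
	fmt.Println(idx, p != nil) // 0 true
	idx, p = findParamByName(params, "path", "orderId")
	fmt.Println(idx, p != nil) // -1 false
}
```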

@@ -3,7 +3,10 @@ package providers
import (
"context"
"errors"
"mizuserver/pkg/config"
"github.com/up9inc/mizu/agent/pkg/config"
"github.com/up9inc/mizu/shared/logger"
ory "github.com/ory/kratos-client-go"
)
@@ -38,7 +41,9 @@ func CreateAdminUser(password string, ctx context.Context) (token *string, err e
if err != nil {
//Delete the user to prevent a half-setup situation where admin user is created without admin privileges
DeleteUser(identityId, ctx)
if err := DeleteUser(identityId, ctx); err != nil {
logger.Log.Error(err)
}
return nil, err, nil
}

View File

@@ -7,6 +7,7 @@ import (
type GeneralStats struct {
EntriesCount int
EntriesVolumeInGB float64
FirstEntryTimestamp int
LastEntryTimestamp int
}
@@ -21,8 +22,9 @@ func GetGeneralStats() GeneralStats {
return generalStats
}
func EntryAdded() {
func EntryAdded(size int) {
generalStats.EntriesCount++
generalStats.EntriesVolumeInGB += float64(size) / (1 << 30)
currentTimestamp := int(time.Now().Unix())
@@ -32,5 +34,3 @@ func EntryAdded() {
generalStats.LastEntryTimestamp = currentTimestamp
}

View File
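The stats provider above now also accumulates entry volume: each entry's byte size is converted to gigabytes with `float64(size) / (1 << 30)` and added to a running total alongside the count and timestamps. A minimal sketch — struct and function names follow the diff, and locking is omitted as in the original:

```go
package main

import (
	"fmt"
	"time"
)

type GeneralStats struct {
	EntriesCount        int
	EntriesVolumeInGB   float64
	FirstEntryTimestamp int
	LastEntryTimestamp  int
}

var generalStats GeneralStats

// EntryAdded records one captured entry of the given size in bytes.
func EntryAdded(size int) {
	generalStats.EntriesCount++
	generalStats.EntriesVolumeInGB += float64(size) / (1 << 30) // bytes -> GB
	now := int(time.Now().Unix())
	if generalStats.FirstEntryTimestamp == 0 {
		generalStats.FirstEntryTimestamp = now
	}
	generalStats.LastEntryTimestamp = now
}

func main() {
	EntryAdded(6)
	EntryAdded(4)
	fmt.Printf("%d entries, %.12f GB\n", generalStats.EntriesCount, generalStats.EntriesVolumeInGB)
}
```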

@@ -2,8 +2,9 @@ package providers_test
import (
"fmt"
"mizuserver/pkg/providers"
"testing"
"github.com/up9inc/mizu/agent/pkg/providers"
)
func TestNoEntryAddedCount(t *testing.T) {
@@ -12,6 +13,10 @@ func TestNoEntryAddedCount(t *testing.T) {
if entriesStats.EntriesCount != 0 {
t.Errorf("unexpected result - expected: %v, actual: %v", 0, entriesStats.EntriesCount)
}
if entriesStats.EntriesVolumeInGB != 0 {
t.Errorf("unexpected result - expected: %v, actual: %v", 0, entriesStats.EntriesVolumeInGB)
}
}
func TestEntryAddedCount(t *testing.T) {
@@ -20,7 +25,7 @@ func TestEntryAddedCount(t *testing.T) {
for _, entriesCount := range tests {
t.Run(fmt.Sprintf("%d", entriesCount), func(t *testing.T) {
for i := 0; i < entriesCount; i++ {
providers.EntryAdded()
providers.EntryAdded(0)
}
entriesStats := providers.GetGeneralStats()
@@ -29,7 +34,38 @@ func TestEntryAddedCount(t *testing.T) {
t.Errorf("unexpected result - expected: %v, actual: %v", entriesCount, entriesStats.EntriesCount)
}
if entriesStats.EntriesVolumeInGB != 0 {
t.Errorf("unexpected result - expected: %v, actual: %v", 0, entriesStats.EntriesVolumeInGB)
}
t.Cleanup(providers.ResetGeneralStats)
})
}
}
func TestEntryAddedVolume(t *testing.T) {
// 6 bytes + 4 bytes
tests := [][]byte{[]byte("volume"), []byte("test")}
var expectedEntriesCount int
var expectedVolumeInGB float64
for _, data := range tests {
t.Run(fmt.Sprintf("%d", len(data)), func(t *testing.T) {
expectedEntriesCount++
expectedVolumeInGB += float64(len(data)) / (1 << 30)
providers.EntryAdded(len(data))
entriesStats := providers.GetGeneralStats()
if entriesStats.EntriesCount != expectedEntriesCount {
t.Errorf("unexpected result - expected: %v, actual: %v", expectedEntriesCount, entriesStats.EntriesCount)
}
if entriesStats.EntriesVolumeInGB != expectedVolumeInGB {
t.Errorf("unexpected result - expected: %v, actual: %v", expectedVolumeInGB, entriesStats.EntriesVolumeInGB)
}
})
}
}

View File

@@ -3,12 +3,13 @@ package providers
import (
"encoding/json"
"fmt"
"github.com/patrickmn/go-cache"
"github.com/up9inc/mizu/shared"
"github.com/up9inc/mizu/tap"
"mizuserver/pkg/models"
"os"
"time"
"github.com/patrickmn/go-cache"
"github.com/up9inc/mizu/agent/pkg/models"
"github.com/up9inc/mizu/shared"
"github.com/up9inc/mizu/tap"
)
const tlsLinkRetainmentTime = time.Minute * 15

View File

@@ -1,12 +1,13 @@
package tapConfig
import (
"github.com/up9inc/mizu/shared"
"github.com/up9inc/mizu/shared/logger"
"mizuserver/pkg/models"
"mizuserver/pkg/utils"
"os"
"sync"
"github.com/up9inc/mizu/agent/pkg/models"
"github.com/up9inc/mizu/agent/pkg/utils"
"github.com/up9inc/mizu/shared"
"github.com/up9inc/mizu/shared/logger"
)
const FilePath = shared.DataDirPath + "tap-config.json"

View File

@@ -1,13 +1,14 @@
package tappedPods
import (
"github.com/up9inc/mizu/shared"
"github.com/up9inc/mizu/shared/logger"
"mizuserver/pkg/providers/tappers"
"mizuserver/pkg/utils"
"os"
"strings"
"sync"
"github.com/up9inc/mizu/agent/pkg/providers/tappers"
"github.com/up9inc/mizu/agent/pkg/utils"
"github.com/up9inc/mizu/shared"
"github.com/up9inc/mizu/shared/logger"
)
const FilePath = shared.DataDirPath + "tapped-pods.json"

View File

@@ -1,11 +1,12 @@
package tappers
import (
"github.com/up9inc/mizu/shared"
"github.com/up9inc/mizu/shared/logger"
"mizuserver/pkg/utils"
"os"
"sync"
"github.com/up9inc/mizu/agent/pkg/utils"
"github.com/up9inc/mizu/shared"
"github.com/up9inc/mizu/shared/logger"
)
const FilePath = shared.DataDirPath + "tappers-status.json"

View File

@@ -85,11 +85,11 @@ func DeleteUser(identityId string, ctx context.Context) error {
return err
}
if result == nil {
return errors.New("unknown error occured during user deletion")
return fmt.Errorf("unknown error occured during user deletion %v", identityId)
}
if result.StatusCode < 200 || result.StatusCode > 299 {
return errors.New(fmt.Sprintf("user deletion returned bad status %d", result.StatusCode))
return fmt.Errorf("user deletion %v returned bad status %d", identityId, result.StatusCode)
} else {
return nil
}

View File

@@ -15,7 +15,8 @@ more on keto here: https://www.ory.sh/keto/docs/
import (
"fmt"
"mizuserver/pkg/utils"
"github.com/up9inc/mizu/agent/pkg/utils"
ketoClient "github.com/ory/keto-client-go/client"
ketoRead "github.com/ory/keto-client-go/client/read"

View File

@@ -1,8 +1,8 @@
package routes
import (
"mizuserver/pkg/controllers"
"mizuserver/pkg/middlewares"
"github.com/up9inc/mizu/agent/pkg/controllers"
"github.com/up9inc/mizu/agent/pkg/middlewares"
"github.com/gin-gonic/gin"
)

View File

@@ -1,8 +1,8 @@
package routes
import (
"mizuserver/pkg/controllers"
"mizuserver/pkg/middlewares"
"github.com/up9inc/mizu/agent/pkg/controllers"
"github.com/up9inc/mizu/agent/pkg/middlewares"
"github.com/gin-gonic/gin"
)

View File

@@ -1,7 +1,7 @@
package routes
import (
"mizuserver/pkg/controllers"
"github.com/up9inc/mizu/agent/pkg/controllers"
"github.com/gin-gonic/gin"
)

View File

@@ -1,7 +1,7 @@
package routes
import (
"mizuserver/pkg/controllers"
"github.com/up9inc/mizu/agent/pkg/controllers"
"github.com/gin-gonic/gin"
)

View File

@@ -1,18 +0,0 @@
package routes
import (
"github.com/gin-gonic/gin"
"net/http"
)
// NotFoundRoute defines the 404 Error route.
func NotFoundRoute(app *gin.Engine) {
app.Use(
func(c *gin.Context) {
c.JSON(http.StatusNotFound, map[string]interface{}{
"error": true,
"msg": "sorry, endpoint is not found",
})
},
)
}

View File

@@ -1,8 +1,8 @@
package routes
import (
"mizuserver/pkg/controllers"
"mizuserver/pkg/middlewares"
"github.com/up9inc/mizu/agent/pkg/controllers"
"github.com/up9inc/mizu/agent/pkg/middlewares"
"github.com/gin-gonic/gin"
)

View File

@@ -1,8 +1,8 @@
package routes
import (
"mizuserver/pkg/controllers"
"mizuserver/pkg/middlewares"
"github.com/up9inc/mizu/agent/pkg/controllers"
"github.com/up9inc/mizu/agent/pkg/middlewares"
"github.com/gin-gonic/gin"
)

View File

@@ -1,8 +1,8 @@
package routes
import (
"mizuserver/pkg/controllers"
"mizuserver/pkg/middlewares"
"github.com/up9inc/mizu/agent/pkg/controllers"
"github.com/up9inc/mizu/agent/pkg/middlewares"
"github.com/gin-gonic/gin"
)

View File

@@ -1,8 +1,8 @@
package routes
import (
"mizuserver/pkg/controllers"
"mizuserver/pkg/middlewares"
"github.com/up9inc/mizu/agent/pkg/controllers"
"github.com/up9inc/mizu/agent/pkg/middlewares"
"github.com/gin-gonic/gin"
)

View File

@@ -1,7 +1,7 @@
package routes
import (
"mizuserver/pkg/controllers"
"github.com/up9inc/mizu/agent/pkg/controllers"
"github.com/gin-gonic/gin"
)

View File

@@ -8,11 +8,12 @@ import (
"regexp"
"strings"
"github.com/up9inc/mizu/agent/pkg/har"
"github.com/up9inc/mizu/shared/logger"
"github.com/google/martian/har"
"github.com/up9inc/mizu/shared"
jsonpath "github.com/yalp/jsonpath"
"github.com/yalp/jsonpath"
)
type RulesMatched struct {
@@ -108,7 +109,7 @@ func PassedValidationRules(rulesMatched []RulesMatched) (bool, int64, int) {
}
for _, rule := range rulesMatched {
if rule.Matched == false {
if !rule.Matched {
return false, responseTime, numberOfRulesMatched
} else {
if strings.ToLower(rule.Rule.Type) == "slo" {

View File

@@ -3,11 +3,9 @@ package up9
import (
"bytes"
"compress/zlib"
"encoding/base64"
"encoding/json"
"fmt"
"io/ioutil"
"mizuserver/pkg/utils"
"net/http"
"net/url"
"regexp"
@@ -15,7 +13,9 @@ import (
"sync"
"time"
"github.com/google/martian/har"
"github.com/up9inc/mizu/agent/pkg/har"
"github.com/up9inc/mizu/agent/pkg/utils"
basenine "github.com/up9inc/basenine/client/go"
"github.com/up9inc/mizu/shared"
"github.com/up9inc/mizu/shared/logger"
@@ -247,7 +247,7 @@ func syncEntriesImpl(token string, model string, envPrefix string, uploadInterva
if err := json.Unmarshal([]byte(dataBytes), &entry); err != nil {
continue
}
harEntry, err := utils.NewEntry(entry.Request, entry.Response, entry.StartTime, entry.ElapsedTime)
harEntry, err := har.NewEntry(entry.Request, entry.Response, entry.StartTime, entry.ElapsedTime)
if err != nil {
continue
}
@@ -259,11 +259,6 @@ func syncEntriesImpl(token string, model string, envPrefix string, uploadInterva
harEntry.Request.URL = utils.SetHostname(harEntry.Request.URL, entry.Destination.Name)
}
// go's default marshal behavior is to encode []byte fields to base64, python's default unmarshal behavior is to not decode []byte fields from base64
if harEntry.Response.Content.Text, err = base64.StdEncoding.DecodeString(string(harEntry.Response.Content.Text)); err != nil {
continue
}
batch = append(batch, *harEntry)
now := time.Now()

View File

@@ -31,10 +31,14 @@ func StartServer(app *gin.Engine) {
}
go func() {
_ = <-signals
<-signals
logger.Log.Infof("Shutting down...")
ctx, _ := context.WithTimeout(context.Background(), 5*time.Second)
_ = srv.Shutdown(ctx)
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
err := srv.Shutdown(ctx)
if err != nil {
logger.Log.Errorf("%v", err)
}
os.Exit(0)
}()
@@ -91,7 +95,7 @@ func UniqueStringSlice(s []string) []string {
uniqueMap := map[string]bool{}
for _, val := range s {
if uniqueMap[val] == true {
if uniqueMap[val] {
continue
}
uniqueMap[val] = true

View File
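The server utility above fixes two small issues flagged by the new linting: the pointlessly assigned receive from the signal channel and the discarded cancel func from context.WithTimeout, plus it now logs a failed Shutdown. A self-contained sketch of the corrected shutdown sequence, with a minimal HTTP server standing in for the agent's:

```go
package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	srv := &http.Server{Addr: ":8080"}

	signals := make(chan os.Signal, 1)
	signal.Notify(signals, syscall.SIGINT, syscall.SIGTERM)

	go func() {
		<-signals // block until a signal arrives; the value itself is not needed
		log.Println("Shutting down...")
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel() // release the timeout's resources, as the diff now does
		if err := srv.Shutdown(ctx); err != nil {
			log.Printf("shutdown error: %v", err)
		}
		os.Exit(0)
	}()

	if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
		log.Fatal(err)
	}
}
```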

@@ -30,15 +30,12 @@ build: ## Build mizu CLI binary (select platform via GOOS / GOARCH env variables
build-all: ## Build for all supported platforms.
@echo "Compiling for every OS and Platform"
@mkdir -p bin && sed s/_SEM_VER_/$(SEM_VER)/g README.md.TEMPLATE > bin/README.md
@$(MAKE) build GOOS=darwin GOARCH=amd64
@$(MAKE) build GOOS=linux GOARCH=amd64
@$(MAKE) build GOOS=linux GOARCH=arm64
@$(MAKE) build GOOS=darwin GOARCH=amd64
@$(MAKE) build GOOS=darwin GOARCH=arm64
@$(MAKE) build GOOS=windows GOARCH=amd64
@mv ./bin/mizu_windows_amd64 ./bin/mizu.exe
@# $(MAKE) GOOS=linux GOARCH=386
@# $(MAKE) GOOS=windows GOARCH=386
@# $(MAKE) GOOS=linux GOARCH=arm64
@# $(MAKE) GOOS=windows GOARCH=arm64
@echo "---------"
@find ./bin -ls

View File

@@ -3,22 +3,27 @@ Full changelog for stable release see in [docs](https://github.com/up9inc/mizu/b
## Download Mizu for your platform
**Mac** (Intel)
**Mac** (x86-64/Intel)
```
curl -Lo mizu https://github.com/up9inc/mizu/releases/download/_SEM_VER_/mizu_darwin_amd64 && chmod 755 mizu
```
**Mac** (Apple M1 silicon)
**Mac** (AArch64/Apple M1 silicon)
```
curl -Lo mizu https://github.com/up9inc/mizu/releases/download/_SEM_VER_/mizu_darwin_arm64 && chmod 755 mizu
```
**Linux**
**Linux** (x86-64)
```
curl -Lo mizu https://github.com/up9inc/mizu/releases/download/_SEM_VER_/mizu_linux_amd64 && chmod 755 mizu
```
**Windows** (Intel 64bit)
**Linux** (AArch64)
```
curl -Lo mizu https://github.com/up9inc/mizu/releases/download/_SEM_VER_/mizu_linux_arm64 && chmod 755 mizu
```
**Windows** (x86-64)
```
curl -LO https://github.com/up9inc/mizu/releases/download/_SEM_VER_/mizu.exe
```

View File

@@ -23,8 +23,8 @@ type Provider struct {
client *http.Client
}
const DefaultRetries = 20
const DefaultTimeout = 5 * time.Second
const DefaultRetries = 3
const DefaultTimeout = 2 * time.Second
func NewProvider(url string, retries int, timeout time.Duration) *Provider {
return &Provider{
@@ -36,16 +36,6 @@ func NewProvider(url string, retries int, timeout time.Duration) *Provider {
}
}
func NewProviderWithoutRetries(url string, timeout time.Duration) *Provider {
return &Provider{
url: url,
retries: 1,
client: &http.Client{
Timeout: timeout,
},
}
}
func (provider *Provider) TestConnection() error {
retriesLeft := provider.retries
for retriesLeft > 0 {

View File
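The provider above drops NewProviderWithoutRetries and tightens the defaults to 3 retries with a 2-second timeout. A hedged sketch of a retrying connection test built around that shape — the probed path, the status check, and the 1-second pause between attempts are assumptions, not the repository's exact logic:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

type Provider struct {
	url     string
	retries int
	client  *http.Client
}

const DefaultRetries = 3
const DefaultTimeout = 2 * time.Second

func NewProvider(url string, retries int, timeout time.Duration) *Provider {
	return &Provider{url: url, retries: retries, client: &http.Client{Timeout: timeout}}
}

// TestConnection polls the server root until it answers 200 OK or the retry
// budget is exhausted.
func (p *Provider) TestConnection() error {
	for left := p.retries; left > 0; left-- {
		if resp, err := p.client.Get(p.url + "/"); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(1 * time.Second)
	}
	return fmt.Errorf("couldn't reach the API server after %d retries", p.retries)
}

func main() {
	provider := NewProvider("http://localhost:8899/mizu", DefaultRetries, DefaultTimeout)
	fmt.Println(provider.TestConnection())
}
```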

@@ -3,13 +3,14 @@ package cmd
import (
"context"
"fmt"
"regexp"
"github.com/up9inc/mizu/cli/apiserver"
"github.com/up9inc/mizu/cli/config"
"github.com/up9inc/mizu/cli/uiUtils"
"github.com/up9inc/mizu/shared/kubernetes"
"github.com/up9inc/mizu/shared/logger"
"github.com/up9inc/mizu/shared/semver"
"net/http"
)
func runMizuCheck() {
@@ -34,7 +35,7 @@ func runMizuCheck() {
}
if checkPassed {
checkPassed = checkServerConnection(kubernetesProvider, cancel)
checkPassed = checkServerConnection(kubernetesProvider)
}
if checkPassed {
@@ -91,23 +92,75 @@ func checkKubernetesVersion(kubernetesVersion *semver.SemVersion) bool {
return true
}
func checkServerConnection(kubernetesProvider *kubernetes.Provider, cancel context.CancelFunc) bool {
func checkServerConnection(kubernetesProvider *kubernetes.Provider) bool {
logger.Log.Infof("\nmizu-connectivity\n--------------------")
serverUrl := GetApiServerUrl()
if response, err := http.Get(fmt.Sprintf("%s/", serverUrl)); err != nil || response.StatusCode != 200 {
startProxyReportErrorIfAny(kubernetesProvider, cancel)
apiServerProvider := apiserver.NewProvider(serverUrl, 1, apiserver.DefaultTimeout)
if err := apiServerProvider.TestConnection(); err == nil {
logger.Log.Infof("%v found Mizu server tunnel available and connected successfully to API server", fmt.Sprintf(uiUtils.Green, "√"))
return true
}
connectedToApiServer := false
if err := checkProxy(serverUrl, kubernetesProvider); err != nil {
logger.Log.Errorf("%v couldn't connect to API server using proxy, err: %v", fmt.Sprintf(uiUtils.Red, "✗"), err)
} else {
connectedToApiServer = true
logger.Log.Infof("%v connected successfully to API server using proxy", fmt.Sprintf(uiUtils.Green, "√"))
}
if err := checkPortForward(serverUrl, kubernetesProvider); err != nil {
logger.Log.Errorf("%v couldn't connect to API server using port-forward, err: %v", fmt.Sprintf(uiUtils.Red, "✗"), err)
} else {
connectedToApiServer = true
logger.Log.Infof("%v connected successfully to API server using port-forward", fmt.Sprintf(uiUtils.Green, "√"))
}
return connectedToApiServer
}
func checkProxy(serverUrl string, kubernetesProvider *kubernetes.Provider) error {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
httpServer, err := kubernetes.StartProxy(kubernetesProvider, config.Config.Tap.ProxyHost, config.Config.Tap.GuiPort, config.Config.MizuResourcesNamespace, kubernetes.ApiServerPodName, cancel)
if err != nil {
return err
}
apiServerProvider := apiserver.NewProvider(serverUrl, apiserver.DefaultRetries, apiserver.DefaultTimeout)
if err := apiServerProvider.TestConnection(); err != nil {
logger.Log.Errorf("%v couldn't connect to API server, err: %v", fmt.Sprintf(uiUtils.Red, "✗"), err)
return false
return err
}
logger.Log.Infof("%v connected successfully to API server", fmt.Sprintf(uiUtils.Green, "√"))
return true
if err := httpServer.Shutdown(ctx); err != nil {
logger.Log.Debugf("Error occurred while stopping proxy, err: %v", err)
}
return nil
}
func checkPortForward(serverUrl string, kubernetesProvider *kubernetes.Provider) error {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
podRegex, _ := regexp.Compile(kubernetes.ApiServerPodName)
forwarder, err := kubernetes.NewPortForward(kubernetesProvider, config.Config.MizuResourcesNamespace, podRegex, config.Config.Tap.GuiPort, ctx, cancel)
if err != nil {
return err
}
apiServerProvider := apiserver.NewProvider(serverUrl, apiserver.DefaultRetries, apiserver.DefaultTimeout)
if err := apiServerProvider.TestConnection(); err != nil {
return err
}
forwarder.Close()
return nil
}
func checkAllResourcesExist(ctx context.Context, kubernetesProvider *kubernetes.Provider, isInstallCommand bool) bool {
@@ -117,32 +170,32 @@ func checkAllResourcesExist(ctx context.Context, kubernetesProvider *kubernetes.
allResourcesExist := checkResourceExist(config.Config.MizuResourcesNamespace, "namespace", exist, err)
exist, err = kubernetesProvider.DoesConfigMapExist(ctx, config.Config.MizuResourcesNamespace, kubernetes.ConfigMapName)
allResourcesExist = checkResourceExist(kubernetes.ConfigMapName, "config map", exist, err)
allResourcesExist = checkResourceExist(kubernetes.ConfigMapName, "config map", exist, err) && allResourcesExist
exist, err = kubernetesProvider.DoesServiceAccountExist(ctx, config.Config.MizuResourcesNamespace, kubernetes.ServiceAccountName)
allResourcesExist = checkResourceExist(kubernetes.ServiceAccountName, "service account", exist, err)
allResourcesExist = checkResourceExist(kubernetes.ServiceAccountName, "service account", exist, err) && allResourcesExist
if config.Config.IsNsRestrictedMode() {
exist, err = kubernetesProvider.DoesRoleExist(ctx, config.Config.MizuResourcesNamespace, kubernetes.RoleName)
allResourcesExist = checkResourceExist(kubernetes.RoleName, "role", exist, err)
allResourcesExist = checkResourceExist(kubernetes.RoleName, "role", exist, err) && allResourcesExist
exist, err = kubernetesProvider.DoesRoleBindingExist(ctx, config.Config.MizuResourcesNamespace, kubernetes.RoleBindingName)
allResourcesExist = checkResourceExist(kubernetes.RoleBindingName, "role binding", exist, err)
allResourcesExist = checkResourceExist(kubernetes.RoleBindingName, "role binding", exist, err) && allResourcesExist
} else {
exist, err = kubernetesProvider.DoesClusterRoleExist(ctx, kubernetes.ClusterRoleName)
allResourcesExist = checkResourceExist(kubernetes.ClusterRoleName, "cluster role", exist, err)
allResourcesExist = checkResourceExist(kubernetes.ClusterRoleName, "cluster role", exist, err) && allResourcesExist
exist, err = kubernetesProvider.DoesClusterRoleBindingExist(ctx, kubernetes.ClusterRoleBindingName)
allResourcesExist = checkResourceExist(kubernetes.ClusterRoleBindingName, "cluster role binding", exist, err)
allResourcesExist = checkResourceExist(kubernetes.ClusterRoleBindingName, "cluster role binding", exist, err) && allResourcesExist
}
exist, err = kubernetesProvider.DoesServiceExist(ctx, config.Config.MizuResourcesNamespace, kubernetes.ApiServerPodName)
allResourcesExist = checkResourceExist(kubernetes.ApiServerPodName, "service", exist, err)
allResourcesExist = checkResourceExist(kubernetes.ApiServerPodName, "service", exist, err) && allResourcesExist
if isInstallCommand {
allResourcesExist = checkInstallResourcesExist(ctx, kubernetesProvider)
allResourcesExist = checkInstallResourcesExist(ctx, kubernetesProvider) && allResourcesExist
} else {
allResourcesExist = checkTapResourcesExist(ctx, kubernetesProvider)
allResourcesExist = checkTapResourcesExist(ctx, kubernetesProvider) && allResourcesExist
}
return allResourcesExist
@@ -153,13 +206,13 @@ func checkInstallResourcesExist(ctx context.Context, kubernetesProvider *kuberne
installResourcesExist := checkResourceExist(kubernetes.DaemonRoleName, "role", exist, err)
exist, err = kubernetesProvider.DoesRoleBindingExist(ctx, config.Config.MizuResourcesNamespace, kubernetes.DaemonRoleBindingName)
installResourcesExist = checkResourceExist(kubernetes.DaemonRoleBindingName, "role binding", exist, err)
installResourcesExist = checkResourceExist(kubernetes.DaemonRoleBindingName, "role binding", exist, err) && installResourcesExist
exist, err = kubernetesProvider.DoesPersistentVolumeClaimExist(ctx, config.Config.MizuResourcesNamespace, kubernetes.PersistentVolumeClaimName)
installResourcesExist = checkResourceExist(kubernetes.PersistentVolumeClaimName, "persistent volume claim", exist, err)
installResourcesExist = checkResourceExist(kubernetes.PersistentVolumeClaimName, "persistent volume claim", exist, err) && installResourcesExist
exist, err = kubernetesProvider.DoesDeploymentExist(ctx, config.Config.MizuResourcesNamespace, kubernetes.ApiServerPodName)
installResourcesExist = checkResourceExist(kubernetes.ApiServerPodName, "deployment", exist, err)
installResourcesExist = checkResourceExist(kubernetes.ApiServerPodName, "deployment", exist, err) && installResourcesExist
return installResourcesExist
}

View File
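Besides splitting the connectivity check into separate proxy and port-forward probes, the diff above fixes a subtle bug in the resource checks: each `checkResourceExist` call used to overwrite `allResourcesExist`, so only the last check decided the verdict. The new form accumulates with `&&`, and keeping the fresh check on the left keeps it from being short-circuited away once an earlier check has failed. A tiny sketch of the difference, with a stand-in check function:

```go
package main

import "fmt"

func check(name string, ok bool) bool {
	fmt.Printf("checking %s: %v\n", name, ok)
	return ok
}

func main() {
	// Buggy form: each assignment discards the previous result.
	buggy := check("namespace", true)
	buggy = check("config map", false)
	buggy = check("service", true)       // overwrites the failure above
	fmt.Println("buggy verdict:", buggy) // true, despite a failed check

	// Fixed form, as in the diff: accumulate, with the new check evaluated
	// first so it still runs (and logs) even after an earlier failure.
	all := check("namespace", true)
	all = check("config map", false) && all
	all = check("service", true) && all
	fmt.Println("fixed verdict:", all) // false
}
```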

@@ -5,6 +5,10 @@ import (
"encoding/json"
"errors"
"fmt"
"path"
"regexp"
"time"
"github.com/up9inc/mizu/cli/apiserver"
"github.com/up9inc/mizu/cli/config/configStructs"
"github.com/up9inc/mizu/cli/errormessage"
@@ -13,8 +17,6 @@ import (
"github.com/up9inc/mizu/cli/resources"
"github.com/up9inc/mizu/cli/uiUtils"
"github.com/up9inc/mizu/shared"
"path"
"time"
"github.com/up9inc/mizu/cli/config"
"github.com/up9inc/mizu/shared/kubernetes"
@@ -25,7 +27,7 @@ func GetApiServerUrl() string {
return fmt.Sprintf("http://%s", kubernetes.GetMizuApiServerProxiedHostAndPath(config.Config.Tap.GuiPort))
}
func startProxyReportErrorIfAny(kubernetesProvider *kubernetes.Provider, cancel context.CancelFunc) {
func startProxyReportErrorIfAny(kubernetesProvider *kubernetes.Provider, ctx context.Context, cancel context.CancelFunc) {
httpServer, err := kubernetes.StartProxy(kubernetesProvider, config.Config.Tap.ProxyHost, config.Config.Tap.GuiPort, config.Config.MizuResourcesNamespace, kubernetes.ApiServerPodName, cancel)
if err != nil {
logger.Log.Errorf(uiUtils.Error, fmt.Sprintf("Error occured while running k8s proxy %v\n"+
@@ -34,21 +36,22 @@ func startProxyReportErrorIfAny(kubernetesProvider *kubernetes.Provider, cancel
return
}
apiProvider = apiserver.NewProviderWithoutRetries(GetApiServerUrl(), time.Second) // short check for proxy
apiProvider = apiserver.NewProvider(GetApiServerUrl(), apiserver.DefaultRetries, apiserver.DefaultTimeout)
if err := apiProvider.TestConnection(); err != nil {
logger.Log.Debugf("Couldn't connect using proxy, stopping proxy and trying to create port-forward")
if err := httpServer.Shutdown(context.Background()); err != nil {
if err := httpServer.Shutdown(ctx); err != nil {
logger.Log.Debugf("Error occurred while stopping proxy %v", errormessage.FormatError(err))
}
if err := kubernetes.NewPortForward(kubernetesProvider, config.Config.MizuResourcesNamespace, kubernetes.ApiServerPodName, config.Config.Tap.GuiPort, cancel); err != nil {
podRegex, _ := regexp.Compile(kubernetes.ApiServerPodName)
if _, err := kubernetes.NewPortForward(kubernetesProvider, config.Config.MizuResourcesNamespace, podRegex, config.Config.Tap.GuiPort, ctx, cancel); err != nil {
logger.Log.Errorf(uiUtils.Error, fmt.Sprintf("Error occured while running port forward %v\n"+
"Try setting different port by using --%s", errormessage.FormatError(err), configStructs.GuiPortTapName))
cancel()
return
}
apiProvider = apiserver.NewProvider(GetApiServerUrl(), apiserver.DefaultRetries, apiserver.DefaultTimeout) // long check for port-forward
apiProvider = apiserver.NewProvider(GetApiServerUrl(), apiserver.DefaultRetries, apiserver.DefaultTimeout)
if err := apiProvider.TestConnection(); err != nil {
logger.Log.Errorf(uiUtils.Error, fmt.Sprintf("Couldn't connect to API server, for more info check logs at %s", fsUtils.GetLogFilePath()))
cancel()

View File

@@ -50,7 +50,9 @@ func init() {
rootCmd.AddCommand(configCmd)
defaultConfig := config.ConfigStruct{}
defaults.Set(&defaultConfig)
if err := defaults.Set(&defaultConfig); err != nil {
logger.Log.Debug(err)
}
configCmd.Flags().BoolP(configStructs.RegenerateConfigName, "r", defaultConfig.Config.Regenerate, fmt.Sprintf("Regenerate the config file with default values to path %s or to chosen path using --%s", defaultConfig.ConfigFilePath, config.ConfigFilePathCommandName))
}

View File

@@ -4,10 +4,6 @@ import (
"context"
"errors"
"fmt"
"github.com/up9inc/mizu/shared/kubernetes"
core "k8s.io/api/core/v1"
"regexp"
"time"
"github.com/creasty/defaults"
"github.com/up9inc/mizu/cli/config"
@@ -35,7 +31,9 @@ func runMizuInstall() {
var defaultMaxEntriesDBSizeBytes int64 = 200 * 1000 * 1000
defaultResources := shared.Resources{}
defaults.Set(&defaultResources)
if err := defaults.Set(&defaultResources); err != nil {
logger.Log.Debug(err)
}
mizuAgentConfig := getInstallMizuAgentConfig(defaultMaxEntriesDBSizeBytes, defaultResources)
serializedMizuConfig, err := getSerializedMizuAgentConfig(mizuAgentConfig)
@@ -46,7 +44,7 @@ func runMizuInstall() {
if err = resources.CreateInstallMizuResources(ctx, kubernetesProvider, serializedValidationRules,
serializedContract, serializedMizuConfig, config.Config.IsNsRestrictedMode(),
config.Config.MizuResourcesNamespace, config.Config.AgentImage, config.Config.BasenineImage,
config.Config.MizuResourcesNamespace, config.Config.AgentImage,
config.Config.KratosImage, config.Config.KetoImage,
nil, defaultMaxEntriesDBSizeBytes, defaultResources, config.Config.ImagePullPolicy(),
config.Config.LogLevel(), false); err != nil {
@@ -63,20 +61,6 @@ func runMizuInstall() {
return
}
logger.Log.Infof("Waiting for Mizu server to start...")
readyChan := make(chan string)
readyErrorChan := make(chan error)
go watchApiServerPodReady(ctx, kubernetesProvider, readyChan, readyErrorChan)
select {
case readyMessage := <-readyChan:
logger.Log.Infof(readyMessage)
case err := <-readyErrorChan:
defer resources.CleanUpMizuResources(ctx, cancel, kubernetesProvider, config.Config.IsNsRestrictedMode(), config.Config.MizuResourcesNamespace)
logger.Log.Errorf(uiUtils.Error, fmt.Sprintf("%v", errormessage.FormatError(err)))
return
}
logger.Log.Infof(uiUtils.Magenta, "Installation completed, run `mizu view` to connect to the mizu daemon instance")
}
@@ -92,63 +76,8 @@ func getInstallMizuAgentConfig(maxDBSizeBytes int64, tapperResources shared.Reso
StandaloneMode: true,
ServiceMap: config.Config.ServiceMap,
OAS: config.Config.OAS,
Elastic: config.Config.Elastic,
}
return &mizuAgentConfig
}
func watchApiServerPodReady(ctx context.Context, kubernetesProvider *kubernetes.Provider, readyChan chan string, readyErrorChan chan error) {
podExactRegex := regexp.MustCompile(fmt.Sprintf("^%s.*", kubernetes.ApiServerPodName))
podWatchHelper := kubernetes.NewPodWatchHelper(kubernetesProvider, podExactRegex)
eventChan, errorChan := kubernetes.FilteredWatch(ctx, podWatchHelper, []string{config.Config.MizuResourcesNamespace}, podWatchHelper)
apiServerTimeoutSec := config.GetIntEnvConfig(config.ApiServerTimeoutSec, 120)
timeAfter := time.After(time.Duration(apiServerTimeoutSec) * time.Second)
for {
select {
case wEvent, ok := <-eventChan:
if !ok {
eventChan = nil
continue
}
switch wEvent.Type {
case kubernetes.EventAdded:
logger.Log.Debugf("Watching API Server pod ready loop, added")
case kubernetes.EventDeleted:
logger.Log.Debugf("Watching API Server pod ready loop, %s removed", kubernetes.ApiServerPodName)
case kubernetes.EventModified:
modifiedPod, err := wEvent.ToPod()
if err != nil {
readyErrorChan <- err
return
}
logger.Log.Debugf("Watching API Server pod ready loop, modified: %v", modifiedPod.Status.Phase)
if modifiedPod.Status.Phase == core.PodRunning {
readyChan <- fmt.Sprintf("%v pod is running", modifiedPod.Name)
return
}
case kubernetes.EventBookmark:
break
case kubernetes.EventError:
break
}
case err, ok := <-errorChan:
if !ok {
errorChan = nil
continue
}
readyErrorChan <- fmt.Errorf("[ERROR] Agent creation, watching %v namespace, error: %v", config.Config.MizuResourcesNamespace, err)
return
case <-timeAfter:
readyErrorChan <- fmt.Errorf("mizu API server was not ready in time")
return
case <-ctx.Done():
logger.Log.Debugf("Watching API Server pod ready loop, ctx done")
return
}
}
}

View File

@@ -23,7 +23,7 @@ var logsCmd = &cobra.Command{
if err != nil {
return nil
}
ctx, _ := context.WithCancel(context.Background())
ctx := context.Background()
if validationErr := config.Config.Logs.Validate(); validationErr != nil {
return errormessage.FormatError(validationErr)
@@ -43,7 +43,9 @@ func init() {
rootCmd.AddCommand(logsCmd)
defaultLogsConfig := configStructs.LogsConfig{}
defaults.Set(&defaultLogsConfig)
if err := defaults.Set(&defaultLogsConfig); err != nil {
logger.Log.Debug(err)
}
logsCmd.Flags().StringP(configStructs.FileLogsName, "f", defaultLogsConfig.FileStr, "Path for zip file (default current <pwd>\\mizu_logs.zip)")
}

View File

@@ -30,7 +30,9 @@ Further info is available at https://github.com/up9inc/mizu`,
func init() {
defaultConfig := config.ConfigStruct{}
defaults.Set(&defaultConfig)
if err := defaults.Set(&defaultConfig); err != nil {
logger.Log.Debug(err)
}
rootCmd.PersistentFlags().StringSlice(config.SetCommandName, []string{}, fmt.Sprintf("Override values using --%s", config.SetCommandName))
rootCmd.PersistentFlags().String(config.ConfigFilePathCommandName, defaultConfig.ConfigFilePath, fmt.Sprintf("Override config file path using --%s", config.ConfigFilePathCommandName))

View File

@@ -104,7 +104,9 @@ func init() {
rootCmd.AddCommand(tapCmd)
defaultTapConfig := configStructs.TapConfig{}
defaults.Set(&defaultTapConfig)
if err := defaults.Set(&defaultTapConfig); err != nil {
logger.Log.Debug(err)
}
tapCmd.Flags().Uint16P(configStructs.GuiPortTapName, "p", defaultTapConfig.GuiPort, "Provide a custom port for the web interface webserver")
tapCmd.Flags().StringSliceP(configStructs.NamespacesTapName, "n", defaultTapConfig.Namespaces, "Namespaces selector")

View File

@@ -124,7 +124,7 @@ func RunMizuTap() {
}
logger.Log.Infof("Waiting for Mizu Agent to start...")
if state.mizuServiceAccountExists, err = resources.CreateTapMizuResources(ctx, kubernetesProvider, serializedValidationRules, serializedContract, serializedMizuConfig, config.Config.IsNsRestrictedMode(), config.Config.MizuResourcesNamespace, config.Config.AgentImage, config.Config.BasenineImage, getSyncEntriesConfig(), config.Config.Tap.MaxEntriesDBSizeBytes(), config.Config.Tap.ApiServerResources, config.Config.ImagePullPolicy(), config.Config.LogLevel()); err != nil {
if state.mizuServiceAccountExists, err = resources.CreateTapMizuResources(ctx, kubernetesProvider, serializedValidationRules, serializedContract, serializedMizuConfig, config.Config.IsNsRestrictedMode(), config.Config.MizuResourcesNamespace, config.Config.AgentImage, getSyncEntriesConfig(), config.Config.Tap.MaxEntriesDBSizeBytes(), config.Config.Tap.ApiServerResources, config.Config.ImagePullPolicy(), config.Config.LogLevel()); err != nil {
var statusError *k8serrors.StatusError
if errors.As(err, &statusError) {
if statusError.ErrStatus.Reason == metav1.StatusReasonAlreadyExists {
@@ -164,6 +164,7 @@ func getTapMizuAgentConfig() *shared.MizuAgentConfig {
AgentDatabasePath: shared.DataDirPath,
ServiceMap: config.Config.ServiceMap,
OAS: config.Config.OAS,
Elastic: config.Config.Elastic,
}
return &mizuAgentConfig
@@ -342,7 +343,7 @@ func watchApiServerPod(ctx context.Context, kubernetesProvider *kubernetes.Provi
if modifiedPod.Status.Phase == core.PodRunning && !isPodReady {
isPodReady = true
postApiServerStarted(ctx, kubernetesProvider, cancel, err)
postApiServerStarted(ctx, kubernetesProvider, cancel)
}
case kubernetes.EventBookmark:
break
@@ -405,7 +406,7 @@ func watchApiServerEvents(ctx context.Context, kubernetesProvider *kubernetes.Pr
case "FailedScheduling", "Failed":
logger.Log.Errorf(uiUtils.Error, fmt.Sprintf("Mizu API Server status: %s - %s", event.Reason, event.Note))
cancel()
break
}
case err, ok := <-errorChan:
if !ok {
@@ -421,11 +422,11 @@ func watchApiServerEvents(ctx context.Context, kubernetesProvider *kubernetes.Pr
}
}
func postApiServerStarted(ctx context.Context, kubernetesProvider *kubernetes.Provider, cancel context.CancelFunc, err error) {
startProxyReportErrorIfAny(kubernetesProvider, cancel)
func postApiServerStarted(ctx context.Context, kubernetesProvider *kubernetes.Provider, cancel context.CancelFunc) {
startProxyReportErrorIfAny(kubernetesProvider, ctx, cancel)
options, _ := getMizuApiFilteringOptions()
if err = startTapperSyncer(ctx, cancel, kubernetesProvider, state.targetNamespaces, *options, state.startTime); err != nil {
if err := startTapperSyncer(ctx, cancel, kubernetesProvider, state.targetNamespaces, *options, state.startTime); err != nil {
logger.Log.Errorf(uiUtils.Error, fmt.Sprintf("Error starting mizu tapper syncer: %v", err))
cancel()
}

View File

@@ -36,7 +36,9 @@ func init() {
rootCmd.AddCommand(versionCmd)
defaultVersionConfig := configStructs.VersionConfig{}
defaults.Set(&defaultVersionConfig)
if err := defaults.Set(&defaultVersionConfig); err != nil {
logger.Log.Debug(err)
}
versionCmd.Flags().BoolP(configStructs.DebugInfoVersionName, "d", defaultVersionConfig.DebugInfo, "Provide all information about version")

View File

@@ -6,6 +6,7 @@ import (
"github.com/up9inc/mizu/cli/config"
"github.com/up9inc/mizu/cli/config/configStructs"
"github.com/up9inc/mizu/cli/telemetry"
"github.com/up9inc/mizu/shared/logger"
)
var viewCmd = &cobra.Command{
@@ -22,10 +23,14 @@ func init() {
rootCmd.AddCommand(viewCmd)
defaultViewConfig := configStructs.ViewConfig{}
defaults.Set(&defaultViewConfig)
if err := defaults.Set(&defaultViewConfig); err != nil {
logger.Log.Debug(err)
}
viewCmd.Flags().Uint16P(configStructs.GuiPortViewName, "p", defaultViewConfig.GuiPort, "Provide a custom port for the web interface webserver")
viewCmd.Flags().StringP(configStructs.UrlViewName, "u", defaultViewConfig.Url, "Provide a custom host")
viewCmd.Flags().MarkHidden(configStructs.UrlViewName)
if err := viewCmd.Flags().MarkHidden(configStructs.UrlViewName); err != nil {
logger.Log.Debug(err)
}
}

View File

@@ -47,7 +47,7 @@ func runMizuView() {
return
}
logger.Log.Infof("Establishing connection to k8s cluster...")
startProxyReportErrorIfAny(kubernetesProvider, cancel)
startProxyReportErrorIfAny(kubernetesProvider, ctx, cancel)
}
apiServerProvider := apiserver.NewProvider(url, apiserver.DefaultRetries, apiserver.DefaultTimeout)

View File

@@ -28,7 +28,6 @@ type ConfigStruct struct {
Auth configStructs.AuthConfig `yaml:"auth"`
Config configStructs.ConfigConfig `yaml:"config,omitempty"`
AgentImage string `yaml:"agent-image,omitempty" readonly:""`
BasenineImage string `yaml:"basenine-image,omitempty" readonly:""`
KratosImage string `yaml:"kratos-image,omitempty" readonly:""`
KetoImage string `yaml:"keto-image,omitempty" readonly:""`
ImagePullPolicyStr string `yaml:"image-pull-policy" default:"Always"`
@@ -39,8 +38,9 @@ type ConfigStruct struct {
ConfigFilePath string `yaml:"config-path,omitempty" readonly:""`
HeadlessMode bool `yaml:"headless" default:"false"`
LogLevelStr string `yaml:"log-level,omitempty" default:"INFO" readonly:""`
ServiceMap bool `yaml:"service-map,omitempty" default:"false" readonly:""`
ServiceMap bool `yaml:"service-map" default:"false"`
OAS bool `yaml:"oas,omitempty" default:"false" readonly:""`
Elastic shared.ElasticConfig `yaml:"elastic"`
}
func (config *ConfigStruct) validate() error {
@@ -52,10 +52,9 @@ func (config *ConfigStruct) validate() error {
}
func (config *ConfigStruct) SetDefaults() {
config.BasenineImage = fmt.Sprintf("%s:%s", shared.BasenineImageRepo, shared.BasenineImageTag)
config.KratosImage = shared.KratosImageDefault
config.KetoImage = shared.KetoImageDefault
config.AgentImage = fmt.Sprintf("gcr.io/up9-docker-hub/mizu/%s:%s", mizu.Branch, mizu.SemVer)
config.AgentImage = fmt.Sprintf("%s:%s", shared.MizuAgentImageRepo, mizu.SemVer)
config.ConfigFilePath = path.Join(mizu.GetMizuFolderPath(), "config.yaml")
}
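Note: dropping `omitempty` and `readonly:""` from the `service-map` tag is presumably what lets the field appear in, and be set from, the user's config. A small illustration of the `omitempty` part of that effect, assuming a defaults library compatible with `github.com/creasty/defaults` and a YAML encoder that honors `omitempty`; the struct and field names are illustrative only:

```go
package main

import (
	"fmt"

	"github.com/creasty/defaults"
	"gopkg.in/yaml.v2"
)

// Illustrative struct only (not Mizu's ConfigStruct): Hidden mirrors the old
// tag with omitempty, Visible mirrors the new one without it.
type cfg struct {
	Hidden  bool `yaml:"hidden,omitempty" default:"false"`
	Visible bool `yaml:"visible" default:"false"`
}

func main() {
	var c cfg
	if err := defaults.Set(&c); err != nil {
		panic(err)
	}
	out, _ := yaml.Marshal(c)
	// A false boolean tagged omitempty is dropped from the output, so only
	// "visible: false" is printed; removing omitempty is what makes such a
	// field show up in generated config output.
	fmt.Print(string(out))
}
```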

View File

@@ -60,12 +60,12 @@ func (config *TapConfig) MaxEntriesDBSizeBytes() int64 {
func (config *TapConfig) Validate() error {
_, compileErr := regexp.Compile(config.PodRegexStr)
if compileErr != nil {
return errors.New(fmt.Sprintf("%s is not a valid regex %s", config.PodRegexStr, compileErr))
return fmt.Errorf("%s is not a valid regex %s", config.PodRegexStr, compileErr)
}
_, parseHumanDataSizeErr := units.HumanReadableToBytes(config.HumanMaxEntriesDBSize)
if parseHumanDataSizeErr != nil {
return errors.New(fmt.Sprintf("Could not parse --%s value %s", HumanMaxEntriesDBSizeTapName, config.HumanMaxEntriesDBSize))
return fmt.Errorf("Could not parse --%s value %s", HumanMaxEntriesDBSizeTapName, config.HumanMaxEntriesDBSize)
}
if config.Workspace != "" {
@@ -76,7 +76,7 @@ func (config *TapConfig) Validate() error {
}
if config.Analysis && config.Workspace != "" {
return errors.New(fmt.Sprintf("Can't run with both --%s and --%s flags", AnalysisTapName, WorkspaceTapName))
return fmt.Errorf("Can't run with both --%s and --%s flags", AnalysisTapName, WorkspaceTapName)
}
return nil
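Note: this hunk (and the similar one in fsUtils below) replaces `errors.New(fmt.Sprintf(...))` with `fmt.Errorf(...)`, the form Go linters prefer. A small, self-contained comparison of the two forms; the `%w` wrapping shown at the end is an extra option, not something the diff itself uses:

```go
package main

import (
	"errors"
	"fmt"
)

func main() {
	compileErr := errors.New("missing closing parenthesis")

	// Old form: format a string, then wrap it in errors.New.
	oldErr := errors.New(fmt.Sprintf("%s is not a valid regex %s", "([a-z", compileErr))

	// New form: fmt.Errorf formats directly. With the %w verb the underlying
	// error is also wrapped, so callers can use errors.Is / errors.As on it.
	newErr := fmt.Errorf("%s is not a valid regex: %w", "([a-z", compileErr)

	fmt.Println(oldErr)
	fmt.Println(errors.Is(newErr, compileErr)) // true
}
```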

View File

@@ -1,7 +1,6 @@
package fsUtils
import (
"errors"
"fmt"
"os"
)
@@ -18,7 +17,7 @@ func EnsureDir(dirName string) error {
return err
}
if !info.IsDir() {
return errors.New(fmt.Sprintf("path exists but is not a directory: %s", dirName))
return fmt.Errorf("path exists but is not a directory: %s", dirName)
}
return nil
}

View File

@@ -15,7 +15,7 @@ import (
"k8s.io/apimachinery/pkg/util/intstr"
)
func CreateTapMizuResources(ctx context.Context, kubernetesProvider *kubernetes.Provider, serializedValidationRules string, serializedContract string, serializedMizuConfig string, isNsRestrictedMode bool, mizuResourcesNamespace string, agentImage string, basenineImage string, syncEntriesConfig *shared.SyncEntriesConfig, maxEntriesDBSizeBytes int64, apiServerResources shared.Resources, imagePullPolicy core.PullPolicy, logLevel logging.Level) (bool, error) {
func CreateTapMizuResources(ctx context.Context, kubernetesProvider *kubernetes.Provider, serializedValidationRules string, serializedContract string, serializedMizuConfig string, isNsRestrictedMode bool, mizuResourcesNamespace string, agentImage string, syncEntriesConfig *shared.SyncEntriesConfig, maxEntriesDBSizeBytes int64, apiServerResources shared.Resources, imagePullPolicy core.PullPolicy, logLevel logging.Level) (bool, error) {
if !isNsRestrictedMode {
if err := createMizuNamespace(ctx, kubernetesProvider, mizuResourcesNamespace); err != nil {
return false, err
@@ -42,7 +42,6 @@ func CreateTapMizuResources(ctx context.Context, kubernetesProvider *kubernetes.
Namespace: mizuResourcesNamespace,
PodName: kubernetes.ApiServerPodName,
PodImage: agentImage,
BasenineImage: basenineImage,
KratosImage: "",
KetoImage: "",
ServiceAccountName: serviceAccountName,
@@ -68,7 +67,7 @@ func CreateTapMizuResources(ctx context.Context, kubernetesProvider *kubernetes.
return mizuServiceAccountExists, nil
}
func CreateInstallMizuResources(ctx context.Context, kubernetesProvider *kubernetes.Provider, serializedValidationRules string, serializedContract string, serializedMizuConfig string, isNsRestrictedMode bool, mizuResourcesNamespace string, agentImage string, basenineImage string, kratosImage string, ketoImage string, syncEntriesConfig *shared.SyncEntriesConfig, maxEntriesDBSizeBytes int64, apiServerResources shared.Resources, imagePullPolicy core.PullPolicy, logLevel logging.Level, noPersistentVolumeClaim bool) error {
func CreateInstallMizuResources(ctx context.Context, kubernetesProvider *kubernetes.Provider, serializedValidationRules string, serializedContract string, serializedMizuConfig string, isNsRestrictedMode bool, mizuResourcesNamespace string, agentImage string, kratosImage string, ketoImage string, syncEntriesConfig *shared.SyncEntriesConfig, maxEntriesDBSizeBytes int64, apiServerResources shared.Resources, imagePullPolicy core.PullPolicy, logLevel logging.Level, noPersistentVolumeClaim bool) error {
if err := createMizuNamespace(ctx, kubernetesProvider, mizuResourcesNamespace); err != nil {
return err
}
@@ -98,7 +97,6 @@ func CreateInstallMizuResources(ctx context.Context, kubernetesProvider *kuberne
Namespace: mizuResourcesNamespace,
PodName: kubernetes.ApiServerPodName,
PodImage: agentImage,
BasenineImage: basenineImage,
KratosImage: kratosImage,
KetoImage: ketoImage,
ServiceAccountName: serviceAccountName,

View File

@@ -53,6 +53,7 @@ func ReportTapTelemetry(apiProvider *apiserver.Provider, args interface{}, start
"args": string(argsBytes),
"executionTimeInSeconds": int(time.Since(startTime).Seconds()),
"apiCallsCount": generalStats["EntriesCount"],
"trafficVolumeInGB": generalStats["EntriesVolumeInGB"],
}
if err := sendTelemetry(argsMap); err != nil {

View File

@@ -23,9 +23,5 @@ func IsTokenValid(tokenString string, envName string) bool {
}
defer response.Body.Close()
if response.StatusCode != http.StatusOK {
return false
}
return true
return response.StatusCode == http.StatusOK
}
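Note: the token check is reduced to returning the boolean expression directly, the simplification gosimple-style linters suggest. A minimal sketch of the before/after shapes (function names are illustrative, not Mizu code):

```go
package main

import (
	"fmt"
	"net/http"
)

// okVerbose has the shape of the old code; okDirect is the simplified form.
func okVerbose(code int) bool {
	if code != http.StatusOK {
		return false
	}
	return true
}

func okDirect(code int) bool {
	return code == http.StatusOK
}

func main() {
	fmt.Println(okVerbose(http.StatusOK), okDirect(http.StatusOK)) // true true
}
```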

View File

@@ -1,69 +0,0 @@
# creates image in which mizu agent is remotely debuggable using delve
FROM node:16-slim AS site-build
WORKDIR /app/ui-build
COPY ui/package.json .
COPY ui/package-lock.json .
RUN npm i
COPY ui .
RUN npm run build
RUN npm run build-ent
FROM golang:1.16-alpine AS builder
# Set necessary environment variables needed for our image.
ENV CGO_ENABLED=1 GOOS=linux GOARCH=amd64
RUN apk add libpcap-dev gcc g++ make bash perl-utils
# Move to agent working directory (/agent-build).
WORKDIR /app/agent-build
COPY agent/go.mod agent/go.sum ./
COPY shared/go.mod shared/go.mod ../shared/
COPY tap/go.mod tap/go.mod ../tap/
COPY tap/api/go.* ../tap/api/
RUN go mod download
# cheap trick to make the build faster (as long as go.mod wasn't changed)
RUN go list -f '{{.Path}}@{{.Version}}' -m all | sed 1d | grep -e 'go-cache' | xargs go get
ARG COMMIT_HASH
ARG GIT_BRANCH
ARG BUILD_TIMESTAMP
ARG SEM_VER=0.0.0
# Copy and build agent code
COPY shared ../shared
COPY tap ../tap
COPY agent .
# Include gcflags for debugging
RUN go build -gcflags="all=-N -l" -o mizuagent .
COPY devops/build_extensions_debug.sh ..
RUN cd .. && /bin/bash build_extensions_debug.sh
FROM golang:1.16-alpine
# Set necessary environment variables needed for our image.
RUN apk add bash libpcap-dev gcc g++
WORKDIR /app
# Copy binary and config files from /build to root folder of scratch container.
COPY --from=builder ["/app/agent-build/mizuagent", "."]
COPY --from=builder ["/app/agent/build/extensions", "extensions"]
COPY --from=site-build ["/app/ui-build/build", "site"]
COPY --from=site-build ["/app/ui-build/build-ent", "site-standalone"]
RUN mkdir /app/data/
# install delve
ENV CGO_ENABLED=1 GOOS=linux GOARCH=amd64
RUN go get github.com/go-delve/delve/cmd/dlv
ENV GIN_MODE=debug
# delve ports
EXPOSE 2345 2346
# this script runs both apiserver and passivetapper and exits if either of them exits, preventing a scenario where the container keeps running with only one of the processes
ENTRYPOINT "/app/mizuagent"

View File

@@ -1,28 +0,0 @@
#!/bin/bash
set -e
GCP_PROJECT=up9-docker-hub
REPOSITORY=gcr.io/$GCP_PROJECT
SERVER_NAME=mizu
GIT_BRANCH=$(git branch | grep \* | cut -d ' ' -f2 | tr '[:upper:]' '[:lower:]')
DOCKER_REPO=$REPOSITORY/$SERVER_NAME/$GIT_BRANCH
SEM_VER=${SEM_VER=0.0.0}
DOCKER_TAGGED_BUILDS=("$DOCKER_REPO:latest" "$DOCKER_REPO:$SEM_VER")
if [ "$GIT_BRANCH" = 'develop' -o "$GIT_BRANCH" = 'master' -o "$GIT_BRANCH" = 'main' ]
then
echo "Pushing to $GIT_BRANCH is allowed only via CI"
exit 1
fi
echo "building ${DOCKER_TAGGED_BUILDS[@]}"
DOCKER_TAGS_ARGS=$(echo ${DOCKER_TAGGED_BUILDS[@]/#/-t }) # "-t FIRST_TAG -t SECOND_TAG ..."
docker build -f debug.Dockerfile $DOCKER_TAGS_ARGS --build-arg SEM_VER=${SEM_VER} --build-arg BUILD_TIMESTAMP=${BUILD_TIMESTAMP} --build-arg GIT_BRANCH=${GIT_BRANCH} --build-arg COMMIT_HASH=${COMMIT_HASH} .
for DOCKER_TAG in "${DOCKER_TAGGED_BUILDS[@]}"
do
echo pushing "$DOCKER_TAG"
docker push "$DOCKER_TAG"
done

View File

@@ -1,13 +0,0 @@
#!/bin/bash
for f in tap/extensions/*; do
if [ -d "$f" ]; then
echo Building extension: $f
extension=$(basename $f) && \
cd tap/extensions/${extension} && \
go build -buildmode=plugin -o ../${extension}.so . && \
cd ../../.. && \
mkdir -p agent/build/extensions && \
cp tap/extensions/${extension}.so agent/build/extensions
fi
done

View File

@@ -1,12 +0,0 @@
#!/bin/bash
for f in tap/extensions/*; do
if [ -d "$f" ]; then
extension=$(basename $f) && \
cd tap/extensions/${extension} && \
go build -gcflags="all=-N -l" -buildmode=plugin -o ../${extension}.so . && \
cd ../../.. && \
mkdir -p agent/build/extensions && \
cp tap/extensions/${extension}.so agent/build/extensions
fi
done

View File

@@ -0,0 +1,18 @@
FROM dockcross/linux-arm64-musl:latest AS builder-from-amd64-to-arm64v8
# Install Go
RUN curl https://go.dev/dl/go1.16.13.linux-amd64.tar.gz -Lo ./go.linux-amd64.tar.gz
RUN curl https://go.dev/dl/go1.16.13.linux-amd64.tar.gz.asc -Lo ./go.linux-amd64.tar.gz.asc
RUN curl https://dl.google.com/dl/linux/linux_signing_key.pub -Lo linux_signing_key.pub
RUN gpg --import linux_signing_key.pub && gpg --verify ./go.linux-amd64.tar.gz.asc ./go.linux-amd64.tar.gz
RUN rm -rf /usr/local/go && tar -C /usr/local -xzf go.linux-amd64.tar.gz
ENV PATH "$PATH:/usr/local/go/bin"
# Compile libpcap from source
RUN curl https://www.tcpdump.org/release/libpcap-1.10.1.tar.gz -Lo ./libpcap.tar.gz
RUN curl https://www.tcpdump.org/release/libpcap-1.10.1.tar.gz.sig -Lo ./libpcap.tar.gz.sig
RUN curl https://www.tcpdump.org/release/signing-key.asc -Lo ./signing-key.asc
RUN gpg --import signing-key.asc && gpg --verify libpcap.tar.gz.sig libpcap.tar.gz
RUN tar -xzf libpcap.tar.gz && mv ./libpcap-* ./libpcap
RUN cd ./libpcap && ./configure --host=arm && make
RUN cp /work/libpcap/libpcap.a /usr/xcc/aarch64-linux-musl-cross/lib/gcc/aarch64-linux-musl/*/

View File

@@ -0,0 +1,4 @@
#!/bin/bash
set -e
docker build . -t up9inc/linux-arm64-musl-go-libpcap && docker push up9inc/linux-arm64-musl-go-libpcap

View File

@@ -2,6 +2,35 @@
This document summarizes the main features and fixes published in the stable (aka `main`) branch of this project.
Ongoing work and development releases are under `develop` branch.
## 0.24.0
### main features
* ARM64 support -- Mizu is now available for the ARM 64-bit architecture
* Now you can run Mizu with `minikube` on your Apple M1 laptop or any other ARM-based host
* New command helps users verify the Mizu deployment
* Run `mizu check` to verify Mizu was deployed successfully
* `mizu check` verifies version compatibility, resources and permissions required by Mizu
* EXPERIMENTAL: Service Map - graph of all service interactions
* Arrow direction shows the client-to-server connection
* Graph edge width reflects the volume of traffic captured between the services
* to enable this experimental feature, use the `--set service-map=true` flag
### improvements
* Mizu container images are now served from [Docker Hub](https://hub.docker.com/r/up9inc/mizu), as multi-architecture images (arm64, amd64)
* in the Mizu GUI, the filter query can now be applied by pressing CONTROL/COMMAND + ENTER
* Mizu now tries port-forwarding if an http-proxy connection to the Mizu API server is not available
### notable bug fixes
* Fixed HTTP/1.0 presentation, which was previously shown as HTTP/1.1
* Fixed handling of long-lived TCP connections; this improves capturing of gRPC and HTTP/2 traffic and helps in service-mesh setups (Istio, Linkerd)
## 0.23.0
### notable bug fixes
* fixed errors in the Redis protocol parser (better handling of the Array and Bulk String message types)
## 0.22.0
### main features

View File

@@ -4,7 +4,7 @@
This document describes in detail all the permissions required for the full and correct operation of Mizu.
## Editting permissions
## Editing permissions
During installation, Mizu creates a `ServiceAccount` and the roles it requires. No further action is required.
However, if there is a need, it is possible to make changes to Mizu permissions.

Some files were not shown because too many files have changed in this diff.