Compare commits

...

11 Commits

Author SHA1 Message Date
Alon Girmonsky
318b35e785 Update README and fix broken links (#1846)
* Update README with new structure and AI focus

* Update AI section: AI-Powered Root Cause Analysis with agents

* updated links

* added an image to the API context

* some fixes to the readme

* Remove TODO comments - using real images

* Fix broken MCP Registry links in mcp/README.md

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-13 15:43:04 -08:00
Volodymyr Stoiko
fecf290a25 Rename generic capture to l7 dissection specific config (#1841)
* Rename generic capture to l7 dissection specific config

* upd

* upd flags

* Create `REACT_APP_DISSECTION_ENABLED` env to set initial dissection state

---------

Co-authored-by: Serhii Ponomarenko <116438358+tiptophelmet@users.noreply.github.com>
Co-authored-by: tiptophelmet <serhii.ponomarenko.jobs@gmail.com>
2026-02-11 11:27:37 -08:00
Alon Girmonsky
a01f7bed74 Update README with new structure and AI focus (#1844)
* Update README with new structure and AI focus

* Update AI section: AI-Powered Root Cause Analysis with agents

* updated links

* added an image to the API context

* some fixes to the readme

* Remove TODO comments - using real images
2026-02-10 10:40:48 -08:00
Serhii Ponomarenko
633a17a0e0 🔧 Add REACT_APP_SCRIPTING_HIDDEN front env (#1845)
* 🔧 Add `scripting.enabled` helm value

* 🔧 Add `REACT_APP_SCRIPTING_HIDDEN` front env

* 🔧 Change `REACT_APP_SCRIPTING_HIDDEN` front env
2026-02-09 13:39:33 -08:00
Alon Girmonsky
8fac9a5ad5 Fix MCP Hub API tool call field name (#1842)
The Hub API expects 'name' field but the MCP server was sending 'tool'.
This caused all Hub-forwarded tools (list_l4_flows, get_l4_flow_summary,
list_api_calls, etc.) to fail with 'tool name is required' error.

Local tools like check_kubeshark_status were unaffected as they don't
call the Hub API.
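
For illustration, a minimal Go sketch of the corrected request body; the helper name, signature, and base-URL handling are placeholders rather than the actual kubeshark code, while the /tools/call path and field names follow the one-line diff further down this compare:

```go
// Sketch only: the Hub expects the tool name under "name", not "tool".
package mcpsketch

import (
	"bytes"
	"encoding/json"
	"net/http"
)

// callHubTool posts a tool invocation to the Hub's /tools/call endpoint
// (the path matches the one referenced in the MCP unit-test commits below).
func callHubTool(hubBase, toolName string, args map[string]any) (*http.Response, error) {
	requestBody := map[string]any{
		"name":      toolName, // previously sent as "tool", which the Hub rejected
		"arguments": args,
	}
	payload, err := json.Marshal(requestBody)
	if err != nil {
		return nil, err
	}
	return http.Post(hubBase+"/tools/call", "application/json", bytes.NewReader(payload))
}
```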
2026-02-09 13:03:51 -08:00
Ilya Gavrilov
76c5eb6b59 Rename flow and full_flow to conn and flow (#1838)
Co-authored-by: Alon Girmonsky <1990761+alongir@users.noreply.github.com>
2026-02-09 13:01:24 -08:00
Alon Girmonsky
482082ba49 Add MCP Registry support with MCPB package format (#1839)
* Add MCP Registry support with MCPB package format
- Update release workflow to create .mcpb artifacts for MCP Registry
- Update server.json to use MCPB registry type with GitHub namespace
- Use io.github.kubeshark/mcp namespace for GitHub authentication
- Add SHA256 placeholders (to be updated after first release)

* Add automated MCP Registry publishing to release workflow
- Add workflow_dispatch trigger with dry_run option for testing
- Add mcp-publish job that runs after release completes
- Generate server.json dynamically with correct version and SHA256 hashes
- Install and run mcp-publisher automatically
- Update static server.json to reference file with placeholders
- Add MCP Registry section to README
The release workflow now automatically publishes to the MCP Registry
when a new version is tagged. No manual steps required.

* Refactor: Extract MCP publishing to separate workflow
- Create mcp-publish.yml that triggers on release:published
- Simplify release.yml to focus on building and releasing
- MCP workflow has its own workflow_dispatch for testing
- Cleaner separation of concerns

* Address PR review feedback

- Update actions/checkout to v4
- Add OIDC permissions for MCP Registry authentication
- Change trigger from release:published to workflow_call
- Release workflow now calls mcp-publish after artifacts are uploaded
- Keep workflow_dispatch for manual testing

* Add mcp-publisher login step before publish
2026-02-09 10:12:41 -08:00
Serhii Ponomarenko
6ae379cbff 🔧 Hide agentic functionality prototype flags (#1840) 2026-02-06 20:33:15 -08:00
Dan Mudge
3f6c62a7e3 move the name of the data column outside of the tap.tls if statement (#1830)
Co-authored-by: Alon Girmonsky <1990761+alongir@users.noreply.github.com>
2026-02-06 11:46:54 -08:00
Alon Girmonsky
717433badb [3] Add MCP integration test framework (#1834)
* Add MCP (Model Context Protocol) server command

Implement `kubeshark mcp` command that runs an MCP server over stdio,
enabling AI assistants to query Kubeshark's network visibility data.

Features:
- MCP protocol implementation (JSON-RPC 2.0 over stdio)
- Dynamic tool discovery from Hub's /api/mcp endpoint
- Local cluster management tools (check_kubeshark_status, start_kubeshark, stop_kubeshark)
- --url flag for direct connection to existing Kubeshark deployment
- --kubeconfig flag for proxy mode with kubectl
- --allow-destructive flag to enable start/stop operations (safe by default)
- --list-tools flag to display available tools
- --mcp-config flag to generate MCP client configuration
- 5-minute cache TTL for Hub tools/prompts
- Prompts for common analysis tasks
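
As a rough illustration of the stdio transport listed above, a minimal Go sketch of a JSON-RPC 2.0 read loop; the type names and handler signature are assumptions, not the kubeshark implementation.

```go
// Minimal sketch of a JSON-RPC 2.0-over-stdio loop: one request per line in,
// one response per line out. Illustrative only.
package mcpsketch

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type rpcRequest struct {
	JSONRPC string          `json:"jsonrpc"`
	ID      any             `json:"id,omitempty"`
	Method  string          `json:"method"`
	Params  json.RawMessage `json:"params,omitempty"`
}

type rpcResponse struct {
	JSONRPC string `json:"jsonrpc"`
	ID      any    `json:"id,omitempty"`
	Result  any    `json:"result,omitempty"`
}

// serveStdio reads requests from stdin and writes responses to stdout.
func serveStdio(handle func(rpcRequest) any) error {
	scanner := bufio.NewScanner(os.Stdin)
	enc := json.NewEncoder(os.Stdout)
	for scanner.Scan() {
		var req rpcRequest
		if err := json.Unmarshal(scanner.Bytes(), &req); err != nil {
			continue // skip malformed lines in this sketch
		}
		resp := rpcResponse{JSONRPC: "2.0", ID: req.ID, Result: handle(req)}
		if err := enc.Encode(resp); err != nil {
			return fmt.Errorf("write response: %w", err)
		}
	}
	// Surface scanner errors after the loop (cf. the scanner.Err() review fix below).
	return scanner.Err()
}
```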

* Address code review comments for MCP implementation

- Add 30s timeout to HTTP client to prevent hanging requests
- Add scanner.Err() check after stdin processing loop
- Close HTTP response bodies to prevent resource leaks
- Add goroutine to wait on started process to prevent zombies
- Simplify polling loop by removing ineffective context check
- Advertise check_kubeshark_status in URL mode (was callable but hidden)
- Update documentation to clarify URL mode only disables start/stop
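
A small Go sketch of the timeout and body-close fixes above; the function name and URL handling are placeholders.

```go
// Sketch of two of the review fixes: a bounded HTTP client and a closed body.
package mcpsketch

import (
	"io"
	"net/http"
	"time"
)

// hubHTTPClient bounds every request so a hung Hub cannot stall the MCP server.
var hubHTTPClient = &http.Client{Timeout: 30 * time.Second}

func fetchHubTools(url string) ([]byte, error) {
	resp, err := hubHTTPClient.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close() // always release the response body to avoid leaks
	return io.ReadAll(resp.Body)
}
```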

* Fix lint errors in mcpRunner.go

- Use type conversion instead of struct literals for hubMCPTool -> mcpTool
  and hubMCPPromptArg -> mcpPromptArg (S1016 gosimple)
- Lowercase error string to follow Go conventions (ST1005 staticcheck)
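
Illustrative Go sketch of the two lint fixes, using stand-in struct definitions (the real hubMCPTool/mcpTool fields are not shown in this compare):

```go
// S1016 prefers a direct conversion between structurally identical types over
// copying fields; ST1005 wants error strings to start lowercase.
package mcpsketch

import "errors"

type hubMCPTool struct{ Name, Description string }
type mcpTool struct{ Name, Description string }

func convertTool(t hubMCPTool) mcpTool {
	return mcpTool(t) // S1016: conversion instead of a field-by-field struct literal
}

var errNoTools = errors.New("hub returned no tools") // ST1005: lowercase, no punctuation
```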

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Add MCP server unit tests

Comprehensive unit tests for the MCP server implementation:
- Protocol tests (initialize, tools/list, tools/call, prompts/list, prompts/get)
- Tool tests (check_kubeshark_status, start_kubeshark, stop_kubeshark)
- Hub integration tests (tool fetching, caching, prompt handling)
- Error handling tests
- Edge case tests

* Fix MCP unit tests to use correct /tools/call endpoint

- Update all Hub tool tests to use POST /tools/call endpoint instead
  of individual paths like /workloads, /calls, /stats
- Verify arguments in POST body instead of URL query parameters
- Add newMockHubHandler helper for proper Hub endpoint mocking
- Split TestMCP_ToolsList into three tests:
  - TestMCP_ToolsList_CLIOnly: Tests without Hub backend
  - TestMCP_ToolsList_WithDestructive: Tests with destructive flag
  - TestMCP_ToolsList_WithHubBackend: Tests with mock Hub providing tools
- Fix TestMCP_FullConversation to mock Hub MCP endpoint correctly
- Rename URL encoding tests for clarity
- All tests now correctly reflect the implementation
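
A sketch of what a mock Hub handler along the lines of newMockHubHandler could look like; only the /tools/call path, the name/arguments body shape, and the "tool name is required" error come from this compare, the rest is illustrative.

```go
// Sketch of a mock Hub for unit tests: verifies the POST body, not URL params.
package mcpsketch

import (
	"encoding/json"
	"net/http"
	"net/http/httptest"
)

func newMockHub() *httptest.Server {
	mux := http.NewServeMux()
	mux.HandleFunc("/tools/call", func(w http.ResponseWriter, r *http.Request) {
		var body struct {
			Name      string         `json:"name"`
			Arguments map[string]any `json:"arguments"`
		}
		if err := json.NewDecoder(r.Body).Decode(&body); err != nil || body.Name == "" {
			http.Error(w, "tool name is required", http.StatusBadRequest)
			return
		}
		// Echo a minimal result so the MCP server under test can proceed.
		_ = json.NewEncoder(w).Encode(map[string]any{"content": "ok", "tool": body.Name})
	})
	return httptest.NewServer(mux)
}
```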

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Simplify MCP unit tests

- Remove section header comments (10 headers)
- Consolidate similar tests using table-driven patterns
- Simplify test assertions with more concise checks
- Combine edge case tests into single test function
- Reduce verbose test structures

Total reduction: 1477 → 495 lines (66%)
All 24 tests still pass.
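
A generic table-driven Go test of the kind this consolidation refers to; the cases and helpers below are stand-ins, not the actual test file.

```go
package mcpsketch

import "testing"

func TestToolsList(t *testing.T) {
	cases := []struct {
		name        string
		destructive bool
		wantTool    string
	}{
		{"cli only", false, "check_kubeshark_status"},
		{"with destructive", true, "start_kubeshark"},
	}
	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			tools := listTools(tc.destructive) // hypothetical code under test
			if !contains(tools, tc.wantTool) {
				t.Fatalf("expected %q in %v", tc.wantTool, tools)
			}
		})
	}
}

func contains(xs []string, want string) bool {
	for _, x := range xs {
		if x == want {
			return true
		}
	}
	return false
}

// listTools is a stand-in for the code under test.
func listTools(destructive bool) []string {
	tools := []string{"check_kubeshark_status"}
	if destructive {
		tools = append(tools, "start_kubeshark", "stop_kubeshark")
	}
	return tools
}
```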

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Add MCP integration test framework

Add integration tests that run against a real Kubernetes cluster:
- MCP protocol tests (initialize, tools/list, prompts/list)
- Cluster management tests (check_kubeshark_status, start_kubeshark, stop_kubeshark)
- Full lifecycle test (check -> start -> check -> stop -> check)
- API tools tests (list_workloads, list_api_calls, get_api_stats)

Also includes:
- Makefile targets for running integration tests
- Test helper functions (startMCPSession, cleanupKubeshark, etc.)
- Documentation (README.md, TEMPLATE.md, ISSUE_TEMPLATE.md)

* Address review comments on integration tests

Makefile:
- Use unique temporary files (mktemp) instead of shared /tmp/integration-test.log
  to prevent race conditions when multiple test targets run concurrently
- Remove redundant test-integration-verbose target (test-integration already uses -v)
- Add cleanup (rm -f) for temporary log files

integration/mcp_test.go:
- Capture stderr from MCP server for debugging failures
- Add getStderr() method to mcpSession for accessing captured stderr
- Fix potential goroutine leak by adding return statements after t.Fatalf
- Remove t.Run subtests in TestMCP_APIToolsRequireKubeshark to clarify
  sequential execution with shared session
- Fix benchmark to use getKubesharkBinary helper for consistency
- Add Kubernetes cluster check to benchmark (graceful skip)
- Add proper error handling for pipe creation in benchmark
- Remove unnecessary bytes import workaround (now actually used for stderr)
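
A sketch of the graceful-skip pattern mentioned above, assuming kubectl is used as the cluster probe; the helper name mirrors hasKubernetesCluster() but its body is an assumption.

```go
// Integration tests skip rather than fail when no cluster is reachable.
package integrationsketch

import (
	"os/exec"
	"testing"
)

func hasKubernetesCluster() bool {
	// `kubectl version` fails fast when the API server is unreachable.
	return exec.Command("kubectl", "version", "--request-timeout=5s").Run() == nil
}

func TestMCPAgainstCluster(t *testing.T) {
	if !hasKubernetesCluster() {
		t.Skip("no Kubernetes cluster available; skipping integration test")
	}
	// ... start the MCP session and exercise tools/list, check_kubeshark_status, etc.
}
```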

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Simplify and clean up MCP integration tests

- Remove unrelated L4 viewer files (1239 lines)
- Remove template/issue documentation files (419 lines)
- Trim README to essential content only
- Remove TEMPLATE comments from common_test.go
- Add initialize() helper to reduce test boilerplate
- Add hasKubernetesCluster() helper for benchmarks
- Simplify all test functions with consistent patterns

Total reduction: 2964 → 866 lines (71%)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-06 11:37:12 -08:00
Alon Girmonsky
a973d6916d [2] Add MCP server unit tests (#1833)
* Add MCP (Model Context Protocol) server command

Implement `kubeshark mcp` command that runs an MCP server over stdio,
enabling AI assistants to query Kubeshark's network visibility data.

Features:
- MCP protocol implementation (JSON-RPC 2.0 over stdio)
- Dynamic tool discovery from Hub's /api/mcp endpoint
- Local cluster management tools (check_kubeshark_status, start_kubeshark, stop_kubeshark)
- --url flag for direct connection to existing Kubeshark deployment
- --kubeconfig flag for proxy mode with kubectl
- --allow-destructive flag to enable start/stop operations (safe by default)
- --list-tools flag to display available tools
- --mcp-config flag to generate MCP client configuration
- 5-minute cache TTL for Hub tools/prompts
- Prompts for common analysis tasks
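
A minimal Go sketch of the 5-minute tool cache listed above; type and field names are illustrative.

```go
// Cache Hub tools/prompts for a TTL to avoid a round trip on every request.
package mcpsketch

import (
	"sync"
	"time"
)

type toolCache struct {
	mu        sync.Mutex
	tools     []string
	fetchedAt time.Time
	ttl       time.Duration // e.g. 5 * time.Minute
	fetch     func() ([]string, error)
}

func (c *toolCache) get() ([]string, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.tools != nil && time.Since(c.fetchedAt) < c.ttl {
		return c.tools, nil // still fresh; skip the Hub round trip
	}
	tools, err := c.fetch()
	if err != nil {
		return nil, err
	}
	c.tools, c.fetchedAt = tools, time.Now()
	return tools, nil
}
```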

* Address code review comments for MCP implementation

- Add 30s timeout to HTTP client to prevent hanging requests
- Add scanner.Err() check after stdin processing loop
- Close HTTP response bodies to prevent resource leaks
- Add goroutine to wait on started process to prevent zombies
- Simplify polling loop by removing ineffective context check
- Advertise check_kubeshark_status in URL mode (was callable but hidden)
- Update documentation to clarify URL mode only disables start/stop

* Fix lint errors in mcpRunner.go

- Use type conversion instead of struct literals for hubMCPTool -> mcpTool
  and hubMCPPromptArg -> mcpPromptArg (S1016 gosimple)
- Lowercase error string to follow Go conventions (ST1005 staticcheck)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Add MCP server unit tests

Comprehensive unit tests for the MCP server implementation:
- Protocol tests (initialize, tools/list, tools/call, prompts/list, prompts/get)
- Tool tests (check_kubeshark_status, start_kubeshark, stop_kubeshark)
- Hub integration tests (tool fetching, caching, prompt handling)
- Error handling tests
- Edge case tests

* Fix MCP unit tests to use correct /tools/call endpoint

- Update all Hub tool tests to use POST /tools/call endpoint instead
  of individual paths like /workloads, /calls, /stats
- Verify arguments in POST body instead of URL query parameters
- Add newMockHubHandler helper for proper Hub endpoint mocking
- Split TestMCP_ToolsList into three tests:
  - TestMCP_ToolsList_CLIOnly: Tests without Hub backend
  - TestMCP_ToolsList_WithDestructive: Tests with destructive flag
  - TestMCP_ToolsList_WithHubBackend: Tests with mock Hub providing tools
- Fix TestMCP_FullConversation to mock Hub MCP endpoint correctly
- Rename URL encoding tests for clarity
- All tests now correctly reflect the implementation

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Simplify MCP unit tests

- Remove section header comments (10 headers)
- Consolidate similar tests using table-driven patterns
- Simplify test assertions with more concise checks
- Combine edge case tests into single test function
- Reduce verbose test structures

Total reduction: 1477 → 495 lines (66%)
All 24 tests still pass.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-06 11:30:50 -08:00
16 changed files with 449 additions and 182 deletions

.github/workflows/mcp-publish.yml (new file)

@@ -0,0 +1,201 @@
name: MCP Registry Publish
on:
workflow_call:
inputs:
release_tag:
description: 'Release tag to publish (e.g., v52.13.0)'
type: string
required: true
workflow_dispatch:
inputs:
dry_run:
description: 'Dry run - generate server.json but skip actual publishing'
type: boolean
default: true
release_tag:
description: 'Release tag to publish (e.g., v52.13.0)'
type: string
required: true
jobs:
mcp-publish:
name: Publish to MCP Registry
runs-on: ubuntu-latest
permissions:
id-token: write # Required for OIDC authentication with MCP Registry
contents: read # Required for checkout
steps:
- name: Check out the repo
uses: actions/checkout@v4
- name: Determine version
id: version
shell: bash
run: |
# inputs.release_tag works for both workflow_call and workflow_dispatch
VERSION="${{ inputs.release_tag }}"
echo "tag=${VERSION}" >> "$GITHUB_OUTPUT"
echo "Publishing MCP server for version: ${VERSION}"
- name: Download SHA256 files from release
shell: bash
run: |
VERSION="${{ steps.version.outputs.tag }}"
mkdir -p bin
echo "Downloading SHA256 checksums from release ${VERSION}..."
for platform in darwin_arm64 darwin_amd64 linux_arm64 linux_amd64 windows_amd64; do
url="https://github.com/kubeshark/kubeshark/releases/download/${VERSION}/kubeshark-mcp_${platform}.mcpb.sha256"
echo " Fetching ${platform}..."
if ! curl -sfL "${url}" -o "bin/kubeshark-mcp_${platform}.mcpb.sha256"; then
echo "::warning::Failed to download SHA256 for ${platform}"
fi
done
echo "Downloaded checksums:"
ls -la bin/*.sha256 2>/dev/null || echo "No SHA256 files found"
- name: Generate server.json
shell: bash
run: |
VERSION="${{ steps.version.outputs.tag }}"
CLEAN_VERSION="${VERSION#v}"
# Read SHA256 hashes
get_sha256() {
local file="bin/kubeshark-mcp_$1.mcpb.sha256"
if [ -f "$file" ]; then
awk '{print $1}' "$file"
else
echo "HASH_NOT_FOUND"
fi
}
DARWIN_ARM64_SHA256=$(get_sha256 "darwin_arm64")
DARWIN_AMD64_SHA256=$(get_sha256 "darwin_amd64")
LINUX_ARM64_SHA256=$(get_sha256 "linux_arm64")
LINUX_AMD64_SHA256=$(get_sha256 "linux_amd64")
WINDOWS_AMD64_SHA256=$(get_sha256 "windows_amd64")
echo "SHA256 hashes:"
echo " darwin_arm64: ${DARWIN_ARM64_SHA256}"
echo " darwin_amd64: ${DARWIN_AMD64_SHA256}"
echo " linux_arm64: ${LINUX_ARM64_SHA256}"
echo " linux_amd64: ${LINUX_AMD64_SHA256}"
echo " windows_amd64: ${WINDOWS_AMD64_SHA256}"
# Generate server.json using jq for proper formatting
jq -n \
--arg version "$CLEAN_VERSION" \
--arg full_version "$VERSION" \
--arg darwin_arm64_sha "$DARWIN_ARM64_SHA256" \
--arg darwin_amd64_sha "$DARWIN_AMD64_SHA256" \
--arg linux_arm64_sha "$LINUX_ARM64_SHA256" \
--arg linux_amd64_sha "$LINUX_AMD64_SHA256" \
--arg windows_amd64_sha "$WINDOWS_AMD64_SHA256" \
'{
"$schema": "https://static.modelcontextprotocol.io/schemas/2025-12-11/server.schema.json",
"name": "io.github.kubeshark/mcp",
"displayName": "Kubeshark",
"description": "Real-time Kubernetes network traffic visibility and API analysis for HTTP, gRPC, Redis, Kafka, DNS.",
"icon": "https://raw.githubusercontent.com/kubeshark/assets/refs/heads/master/logo/ico/icon.ico",
"repository": { "url": "https://github.com/kubeshark/kubeshark", "source": "github" },
"homepage": "https://kubeshark.com",
"license": "Apache-2.0",
"version": $version,
"authors": [{ "name": "Kubeshark", "url": "https://kubeshark.com" }],
"categories": ["kubernetes", "networking", "observability", "debugging", "security"],
"tags": ["kubernetes", "network", "traffic", "api", "http", "grpc", "kafka", "redis", "dns", "pcap", "wireshark", "tcpdump", "observability", "debugging", "microservices"],
"packages": [
{ "registryType": "mcpb", "identifier": ("https://github.com/kubeshark/kubeshark/releases/download/" + $full_version + "/kubeshark-mcp_darwin_arm64.mcpb"), "fileSha256": $darwin_arm64_sha, "transport": { "type": "stdio" } },
{ "registryType": "mcpb", "identifier": ("https://github.com/kubeshark/kubeshark/releases/download/" + $full_version + "/kubeshark-mcp_darwin_amd64.mcpb"), "fileSha256": $darwin_amd64_sha, "transport": { "type": "stdio" } },
{ "registryType": "mcpb", "identifier": ("https://github.com/kubeshark/kubeshark/releases/download/" + $full_version + "/kubeshark-mcp_linux_arm64.mcpb"), "fileSha256": $linux_arm64_sha, "transport": { "type": "stdio" } },
{ "registryType": "mcpb", "identifier": ("https://github.com/kubeshark/kubeshark/releases/download/" + $full_version + "/kubeshark-mcp_linux_amd64.mcpb"), "fileSha256": $linux_amd64_sha, "transport": { "type": "stdio" } },
{ "registryType": "mcpb", "identifier": ("https://github.com/kubeshark/kubeshark/releases/download/" + $full_version + "/kubeshark-mcp_windows_amd64.mcpb"), "fileSha256": $windows_amd64_sha, "transport": { "type": "stdio" } }
],
"tools": [
{ "name": "check_kubeshark_status", "description": "Check if Kubeshark is currently running in the cluster.", "mode": "proxy" },
{ "name": "start_kubeshark", "description": "Deploy Kubeshark to the Kubernetes cluster. Requires --allow-destructive flag.", "mode": "proxy", "destructive": true },
{ "name": "stop_kubeshark", "description": "Remove Kubeshark from the Kubernetes cluster. Requires --allow-destructive flag.", "mode": "proxy", "destructive": true },
{ "name": "list_workloads", "description": "List pods, services, namespaces, and nodes with observed L7 traffic.", "mode": "all" },
{ "name": "list_api_calls", "description": "Query L7 API transactions (HTTP, gRPC, Redis, Kafka, DNS) with KFL filtering.", "mode": "all" },
{ "name": "get_api_call", "description": "Get detailed information about a specific API call including headers and body.", "mode": "all" },
{ "name": "get_api_stats", "description": "Get aggregated API statistics and metrics.", "mode": "all" },
{ "name": "list_l4_flows", "description": "List L4 (TCP/UDP) network flows with traffic statistics.", "mode": "all" },
{ "name": "get_l4_flow_summary", "description": "Get L4 connectivity summary including top talkers and cross-namespace traffic.", "mode": "all" },
{ "name": "list_snapshots", "description": "List all PCAP snapshots.", "mode": "all" },
{ "name": "create_snapshot", "description": "Create a new PCAP snapshot of captured traffic.", "mode": "all" },
{ "name": "get_dissection_status", "description": "Check L7 protocol parsing status.", "mode": "all" },
{ "name": "enable_dissection", "description": "Enable L7 protocol dissection.", "mode": "all" },
{ "name": "disable_dissection", "description": "Disable L7 protocol dissection.", "mode": "all" }
],
"prompts": [
{ "name": "analyze_traffic", "description": "Analyze API traffic patterns and identify issues" },
{ "name": "find_errors", "description": "Find and summarize API errors and failures" },
{ "name": "trace_request", "description": "Trace a request path through microservices" },
{ "name": "show_topology", "description": "Show service communication topology" },
{ "name": "latency_analysis", "description": "Analyze latency patterns and identify slow endpoints" },
{ "name": "security_audit", "description": "Audit traffic for security concerns" },
{ "name": "compare_traffic", "description": "Compare traffic patterns between time periods" },
{ "name": "debug_connection", "description": "Debug connectivity issues between services" }
],
"configuration": {
"properties": {
"url": { "type": "string", "description": "Direct URL to Kubeshark Hub (e.g., https://kubeshark.example.com). When set, connects directly without kubectl/proxy.", "examples": ["https://kubeshark.example.com", "http://localhost:8899"] },
"kubeconfig": { "type": "string", "description": "Path to kubeconfig file for proxy mode.", "examples": ["~/.kube/config", "/path/to/.kube/config"] },
"allow-destructive": { "type": "boolean", "description": "Enable destructive operations (start_kubeshark, stop_kubeshark). Default: false for safety.", "default": false }
}
},
"modes": {
"url": { "description": "Connect directly to an existing Kubeshark deployment via URL. Cluster management tools are disabled.", "args": ["mcp", "--url", "${url}"] },
"proxy": { "description": "Connect via kubectl port-forward. Requires kubeconfig access to the cluster.", "args": ["mcp", "--kubeconfig", "${kubeconfig}"] },
"proxy-destructive": { "description": "Proxy mode with destructive operations enabled.", "args": ["mcp", "--kubeconfig", "${kubeconfig}", "--allow-destructive"] }
}
}' > mcp/server.json
echo ""
echo "Generated server.json:"
cat mcp/server.json
- name: Install mcp-publisher
shell: bash
run: |
echo "Installing mcp-publisher..."
curl -sfL "https://github.com/modelcontextprotocol/registry/releases/latest/download/mcp-publisher_linux_amd64.tar.gz" | tar xz
chmod +x mcp-publisher
sudo mv mcp-publisher /usr/local/bin/
echo "mcp-publisher installed successfully"
- name: Login to MCP Registry
if: github.event_name != 'workflow_dispatch' || github.event.inputs.dry_run != 'true'
shell: bash
run: mcp-publisher login github
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Publish to MCP Registry
if: github.event_name != 'workflow_dispatch' || github.event.inputs.dry_run != 'true'
shell: bash
run: |
cd mcp
echo "Publishing to MCP Registry..."
if ! mcp-publisher publish; then
echo "::error::Failed to publish to MCP Registry"
exit 1
fi
echo "Successfully published to MCP Registry"
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Dry-run summary
if: github.event_name == 'workflow_dispatch' && github.event.inputs.dry_run == 'true'
shell: bash
run: |
echo "=============================================="
echo "DRY RUN - Would publish the following server.json"
echo "=============================================="
cat mcp/server.json
echo ""
echo "=============================================="
echo "SHA256 checksums downloaded:"
echo "=============================================="
cat bin/*.sha256 2>/dev/null || echo "No SHA256 files found"

.github/workflows/release.yml

@@ -43,6 +43,23 @@ jobs:
run: |
echo '${{ steps.version.outputs.tag }}' >> bin/version.txt
- name: Create MCP Registry artifacts
shell: bash
run: |
cd bin
# Create .mcpb copies for MCP Registry (URL must contain "mcp")
for f in kubeshark_linux_amd64 kubeshark_linux_arm64 kubeshark_darwin_amd64 kubeshark_darwin_arm64; do
if [ -f "$f" ]; then
cp "$f" "${f/kubeshark_/kubeshark-mcp_}.mcpb"
shasum -a 256 "${f/kubeshark_/kubeshark-mcp_}.mcpb" > "${f/kubeshark_/kubeshark-mcp_}.mcpb.sha256"
fi
done
# Handle Windows executable
if [ -f "kubeshark.exe" ]; then
cp kubeshark.exe kubeshark-mcp_windows_amd64.mcpb
shasum -a 256 kubeshark-mcp_windows_amd64.mcpb > kubeshark-mcp_windows_amd64.mcpb.sha256
fi
- name: Release
uses: ncipollo/release-action@v1
with:
@@ -50,4 +67,11 @@ jobs:
artifacts: "bin/*"
tag: ${{ steps.version.outputs.tag }}
prerelease: false
bodyFile: 'bin/README.md'
bodyFile: 'bin/README.md'
mcp-publish:
name: Publish to MCP Registry
needs: [release]
uses: ./.github/workflows/mcp-publish.yml
with:
release_tag: ${{ needs.release.outputs.version }}

README.md

@@ -1,98 +1,132 @@
<p align="center">
<img src="https://raw.githubusercontent.com/kubeshark/assets/master/svg/kubeshark-logo.svg" alt="Kubeshark: Traffic analyzer for Kubernetes." height="128px"/>
<img src="https://raw.githubusercontent.com/kubeshark/assets/master/svg/kubeshark-logo.svg" alt="Kubeshark" height="120px"/>
</p>
<p align="center">
<a href="https://github.com/kubeshark/kubeshark/releases/latest">
<img alt="GitHub Latest Release" src="https://img.shields.io/github/v/release/kubeshark/kubeshark?logo=GitHub&style=flat-square">
</a>
<a href="https://hub.docker.com/r/kubeshark/worker">
<img alt="Docker pulls" src="https://img.shields.io/docker/pulls/kubeshark/worker?color=%23099cec&logo=Docker&style=flat-square">
</a>
<a href="https://hub.docker.com/r/kubeshark/worker">
<img alt="Image size" src="https://img.shields.io/docker/image-size/kubeshark/kubeshark/latest?logo=Docker&style=flat-square">
</a>
<a href="https://discord.gg/WkvRGMUcx7">
<img alt="Discord" src="https://img.shields.io/discord/1042559155224973352?logo=Discord&style=flat-square&label=discord">
</a>
<a href="https://join.slack.com/t/kubeshark/shared_invite/zt-3jdcdgxdv-1qNkhBh9c6CFoE7bSPkpBQ">
<img alt="Slack" src="https://img.shields.io/badge/slack-join_chat-green?logo=Slack&style=flat-square&label=slack">
</a>
<a href="https://github.com/kubeshark/kubeshark/releases/latest"><img alt="Release" src="https://img.shields.io/github/v/release/kubeshark/kubeshark?logo=GitHub&style=flat-square"></a>
<a href="https://hub.docker.com/r/kubeshark/worker"><img alt="Docker pulls" src="https://img.shields.io/docker/pulls/kubeshark/worker?color=%23099cec&logo=Docker&style=flat-square"></a>
<a href="https://discord.gg/WkvRGMUcx7"><img alt="Discord" src="https://img.shields.io/discord/1042559155224973352?logo=Discord&style=flat-square&label=discord"></a>
<a href="https://join.slack.com/t/kubeshark/shared_invite/zt-3jdcdgxdv-1qNkhBh9c6CFoE7bSPkpBQ"><img alt="Slack" src="https://img.shields.io/badge/slack-join_chat-green?logo=Slack&style=flat-square"></a>
</p>
<p align="center"><b>Network Intelligence for Kubernetes</b></p>
<p align="center">
<b>
Want to see Kubeshark in action right now? Visit this
<a href="https://demo.kubeshark.com/">live demo deployment</a> of Kubeshark.
</b>
<a href="https://demo.kubeshark.com/">Live Demo</a> · <a href="https://docs.kubeshark.com">Docs</a>
</p>
**Kubeshark** is an API traffic analyzer for Kubernetes, providing deep packet inspection with complete API and Kubernetes contexts, retaining cluster-wide L4 traffic (PCAP), and using minimal production compute resources.
---
![Simple UI](https://github.com/kubeshark/assets/raw/master/png/kubeshark-ui.png)
* **Cluster-wide, real-time visibility into every packet, API call, and service interaction.**
* Replay any moment in time.
* Resolve incidents at the speed of LLMs. 100% on-premises.
Think [TCPDump](https://en.wikipedia.org/wiki/Tcpdump) and [Wireshark](https://www.wireshark.org/) reimagined for Kubernetes.
![Kubeshark](https://github.com/kubeshark/assets/raw/master/png/stream.png)
Access cluster-wide PCAP traffic by pressing a single button, without the need to install `tcpdump` or manually copy files. Understand the traffic context in relation to the API and Kubernetes contexts.
---
#### Service-Map w/Kubernetes Context
## Get Started
![Service Map with Kubernetes Context](https://github.com/kubeshark/assets/raw/master/png/kubeshark-servicemap.png)
#### Export Cluster-Wide L4 Traffic (PCAP)
Imagine having a cluster-wide [TCPDump](https://www.tcpdump.org/)-like capability—exporting a single [PCAP](https://www.ietf.org/archive/id/draft-gharris-opsawg-pcap-01.html) file that consolidates traffic from multiple nodes, all accessible with a single click.
1. Go to the **Snapshots** tab
2. Create a new snapshot
3. **Optionally** select the nodes (default: all nodes)
4. **Optionally** select the time frame (default: last one hour)
5. Press **Create**
<img width="3342" height="1206" alt="image" src="https://github.com/user-attachments/assets/e8e47996-52b7-4028-9698-f059a13ffdb7" />
Once the snapshot is ready, click the PCAP file to export its contents and open it in Wireshark.
## Getting Started
Download **Kubeshark**'s binary distribution [latest release](https://github.com/kubeshark/kubeshark/releases/latest) or use one of the following methods to deploy **Kubeshark**. The [web-based dashboard](https://docs.kubeshark.com/en/ui) should open in your browser, showing a real-time view of your cluster's traffic.
### Homebrew
[Homebrew](https://brew.sh/) :beer: users can install the Kubeshark CLI with:
```shell
brew install kubeshark
kubeshark tap
```
To clean up:
```shell
kubeshark clean
```
### Helm
Add the Helm repository and install the chart:
```shell
```bash
helm repo add kubeshark https://helm.kubeshark.com
helm install kubeshark kubeshark/kubeshark
```
Follow the on-screen instructions how to connect to the dashboard.
To clean up:
```shell
helm uninstall kubeshark
Dashboard opens automatically. You're capturing traffic.
**With AI** — connect your assistant and debug with natural language:
```bash
brew install kubeshark
claude mcp add kubeshark -- kubeshark mcp
```
## Building From Source
> *"Why did checkout fail at 2:15 PM?"*
> *"Which services have error rates above 1%?"*
Clone this repository and run the `make` command to build it. After the build is complete, the executable can be found at `./bin/kubeshark`.
[MCP setup guide →](https://docs.kubeshark.com/en/mcp)
## Documentation
---
To learn more, read the [documentation](https://docs.kubeshark.com).
## Why Kubeshark
- **Instant root cause** — trace requests across services, see exact errors
- **Zero instrumentation** — no code changes, no SDKs, just deploy
- **Full payload capture** — request/response bodies, headers, timing
- **TLS decryption** — see encrypted traffic without managing keys
- **AI-ready** — query traffic with natural language via MCP
---
### Traffic Analysis and API Dissection
Capture and inspect every API call across your cluster—HTTP, gRPC, Redis, Kafka, DNS, and more. Request/response matching with full payloads, parsed according to protocol specifications. Headers, timing, and complete context. Zero instrumentation required.
![API context](https://github.com/kubeshark/assets/raw/master/png/api_context.png)
[Learn more →](https://docs.kubeshark.com/en/v2/l7_api_dissection)
### L4/L7 Workload Map
Visualize how your services communicate. See dependencies, traffic flow, and identify anomalies at a glance.
![Service Map](https://github.com/kubeshark/assets/raw/master/png/servicemap.png)
[Learn more →](https://docs.kubeshark.com/en/v2/service_map)
### AI-Powered Root Cause Analysis
Resolve production issues in minutes instead of hours. Connect your AI assistant and investigate incidents using natural language. Build network-aware AI agents for forensics, monitoring, compliance, and security.
> *"Why did checkout fail at 2:15 PM?"*
> *"Which services have error rates above 1%?"*
> *"Trace request abc123 through all services"*
Works with Claude Code, Cursor, and any MCP-compatible AI.
[MCP setup guide →](https://docs.kubeshark.com/en/mcp)
### Traffic Retention
Retain every packet. Take snapshots. Export PCAP files. Replay any moment in time.
![Traffic Retention](https://github.com/kubeshark/assets/raw/master/png/snapshots.png)
[Snapshots guide →](https://docs.kubeshark.com/en/v2/traffic_snapshots)
---
## Features
| Feature | Description |
|---------|-------------|
| [**Raw Capture**](https://docs.kubeshark.com/en/v2/raw_capture) | Continuous cluster-wide packet capture with minimal overhead |
| [**Traffic Snapshots**](https://docs.kubeshark.com/en/v2/traffic_snapshots) | Point-in-time snapshots, export as PCAP for Wireshark |
| [**L7 API Dissection**](https://docs.kubeshark.com/en/v2/l7_api_dissection) | Request/response matching with full payloads and protocol parsing |
| [**Protocol Support**](https://docs.kubeshark.com/en/protocols) | HTTP, gRPC, GraphQL, Redis, Kafka, DNS, and more |
| [**TLS Decryption**](https://docs.kubeshark.com/en/encrypted_traffic) | eBPF-based decryption without key management |
| [**AI-Powered Analysis**](https://docs.kubeshark.com/en/v2/ai_powered_analysis) | Query traffic with Claude, Cursor, or any MCP-compatible AI |
| [**Display Filters**](https://docs.kubeshark.com/en/v2/kfl2) | Wireshark-inspired display filters for precise traffic analysis |
| [**100% On-Premises**](https://docs.kubeshark.com/en/air_gapped) | Air-gapped support, no external dependencies |
---
## Install
| Method | Command |
|--------|---------|
| Helm | `helm repo add kubeshark https://helm.kubeshark.com && helm install kubeshark kubeshark/kubeshark` |
| Homebrew | `brew install kubeshark && kubeshark tap` |
| Binary | [Download](https://github.com/kubeshark/kubeshark/releases/latest) |
[Installation guide →](https://docs.kubeshark.com/en/install)
---
## Contributing
We :heart: pull requests! See [CONTRIBUTING.md](CONTRIBUTING.md) for the contribution guide.
We welcome contributions. See [CONTRIBUTING.md](CONTRIBUTING.md).
## License
[Apache-2.0](LICENSE)

View File

@@ -671,7 +671,7 @@ func (s *mcpServer) callHubTool(toolName string, args map[string]any) (string, b
// Build the request body
requestBody := map[string]any{
"tool": toolName,
"name": toolName,
"arguments": args,
}

View File

@@ -116,6 +116,7 @@ func CreateDefaultConfig() ConfigStruct {
},
CanUpdateTargetedPods: true,
CanStopTrafficCapturing: true,
CanControlDissection: true,
ShowAdminConsoleLink: true,
},
},
@@ -139,8 +140,8 @@ func CreateDefaultConfig() ConfigStruct {
"diameter",
"udp-flow",
"tcp-flow",
"udp-flow-full",
"tcp-flow-full",
"udp-conn",
"tcp-conn",
},
PortMapping: configStructs.PortMapping{
HTTP: []uint16{80, 443, 8080},
@@ -154,8 +155,10 @@ func CreateDefaultConfig() ConfigStruct {
CompleteStreamingEnabled: true,
},
Capture: configStructs.CaptureConfig{
Stopped: false,
StopAfter: "5m",
Dissection: configStructs.DissectionConfig{
Enabled: true,
StopAfter: "5m",
},
},
},
}
@@ -181,7 +184,6 @@ type ConfigStruct struct {
License string `yaml:"license" json:"license" default:""`
CloudApiUrl string `yaml:"cloudApiUrl" json:"cloudApiUrl" default:"https://api.kubeshark.com"`
CloudLicenseEnabled bool `yaml:"cloudLicenseEnabled" json:"cloudLicenseEnabled" default:"true"`
AiAssistantEnabled bool `yaml:"aiAssistantEnabled" json:"aiAssistantEnabled" default:"true"`
DemoModeEnabled bool `yaml:"demoModeEnabled" json:"demoModeEnabled" default:"false"`
SupportChatEnabled bool `yaml:"supportChatEnabled" json:"supportChatEnabled" default:"false"`
BetaEnabled bool `yaml:"betaEnabled" json:"betaEnabled" default:"false"`

View File

@@ -12,6 +12,7 @@ import (
)
type ScriptingConfig struct {
Enabled bool `yaml:"enabled" json:"enabled" default:"false"`
Env map[string]interface{} `yaml:"env" json:"env" default:"{}"`
Source string `yaml:"source" json:"source" default:""`
Sources []string `yaml:"sources" json:"sources" default:"[]"`

View File

@@ -167,6 +167,7 @@ type Role struct {
ScriptingPermissions ScriptingPermissions `yaml:"scriptingPermissions" json:"scriptingPermissions"`
CanUpdateTargetedPods bool `yaml:"canUpdateTargetedPods" json:"canUpdateTargetedPods" default:"false"`
CanStopTrafficCapturing bool `yaml:"canStopTrafficCapturing" json:"canStopTrafficCapturing" default:"false"`
CanControlDissection bool `yaml:"canControlDissection" json:"canControlDissection" default:"false"`
ShowAdminConsoleLink bool `yaml:"showAdminConsoleLink" json:"showAdminConsoleLink" default:"false"`
}
@@ -316,9 +317,13 @@ type DelayedDissectionConfig struct {
Memory string `yaml:"memory" json:"memory" default:"4Gi"`
}
type DissectionConfig struct {
Enabled bool `yaml:"enabled" json:"enabled" default:"true"`
StopAfter string `yaml:"stopAfter" json:"stopAfter" default:"5m"`
}
type CaptureConfig struct {
Stopped bool `yaml:"stopped" json:"stopped" default:"false"`
StopAfter string `yaml:"stopAfter" json:"stopAfter" default:"5m"`
Dissection DissectionConfig `yaml:"dissection" json:"dissection"`
CaptureSelf bool `yaml:"captureSelf" json:"captureSelf" default:"false"`
Raw RawCaptureConfig `yaml:"raw" json:"raw"`
DbMaxSize string `yaml:"dbMaxSize" json:"dbMaxSize" default:"500Mi"`

View File

@@ -138,8 +138,8 @@ Example for overriding image names:
| `tap.namespaces` | Target pods in namespaces | `[]` |
| `tap.excludedNamespaces` | Exclude pods in namespaces | `[]` |
| `tap.bpfOverride` | When using AF_PACKET as a traffic capture backend, override any existing pod targeting rules and set explicit BPF expression (e.g. `net 0.0.0.0/0`). | `[]` |
| `tap.capture.stopped` | Set to `false` to have traffic processing start automatically. When set to `true`, traffic processing is stopped by default, resulting in almost no resource consumption (e.g. Kubeshark is dormant). This property can be dynamically control via the dashboard. | `false` |
| `tap.capture.stopAfter` | Set to a duration (e.g. `30s`) to have traffic processing stop after no websocket activity between worker and hub. | `30s` |
| `tap.capture.dissection.enabled` | Set to `true` to have L7 protocol dissection start automatically. When set to `false`, dissection is disabled by default. This property can be dynamically controlled via the dashboard. | `true` |
| `tap.capture.dissection.stopAfter` | Set to a duration (e.g. `30s`) to have L7 dissection stop after no activity. | `5m` |
| `tap.capture.raw.enabled` | Enable raw capture of packets and syscalls to disk for offline analysis | `true` |
| `tap.capture.raw.storageSize` | Maximum storage size for raw capture files (supports K8s quantity format: `1Gi`, `500Mi`, etc.) | `1Gi` |
| `tap.capture.dbMaxSize` | Maximum size for capture database (e.g., `4Gi`, `2000Mi`). When empty, automatically uses 80% of allocated storage (`tap.storageLimit`). | `""` |
@@ -198,7 +198,7 @@ Example for overriding image names:
| `tap.auth.saml.x509crt` | A self-signed X.509 `.cert` contents <br/>(effective, if `tap.auth.type = saml`) | `` |
| `tap.auth.saml.x509key` | A self-signed X.509 `.key` contents <br/>(effective, if `tap.auth.type = saml`) | `` |
| `tap.auth.saml.roleAttribute` | A SAML attribute name corresponding to user's authorization role <br/>(effective, if `tap.auth.type = saml`) | `role` |
| `tap.auth.saml.roles` | A list of SAML authorization roles and their permissions <br/>(effective, if `tap.auth.type = saml`) | `{"admin":{"canDownloadPCAP":true,"canUpdateTargetedPods":true,"canUseScripting":true, "scriptingPermissions":{"canSave":true, "canActivate":true, "canDelete":true}, "canStopTrafficCapturing":true, "filter":"","showAdminConsoleLink":true}}` |
| `tap.auth.saml.roles` | A list of SAML authorization roles and their permissions <br/>(effective, if `tap.auth.type = saml`) | `{"admin":{"canDownloadPCAP":true,"canUpdateTargetedPods":true,"canUseScripting":true, "scriptingPermissions":{"canSave":true, "canActivate":true, "canDelete":true}, "canStopTrafficCapturing":true, "canControlDissection":true, "filter":"","showAdminConsoleLink":true}}` |
| `tap.ingress.enabled` | Enable `Ingress` | `false` |
| `tap.ingress.className` | Ingress class name | `""` |
| `tap.ingress.host` | Host of the `Ingress` | `ks.svc.cluster.local` |
@@ -229,6 +229,7 @@ Example for overriding image names:
| `dumpLogs` | Enable dumping of logs | `false` |
| `headless` | Enable running in headless mode | `false` |
| `license` | License key for the Pro/Enterprise edition | `""` |
| `scripting.enabled` | Enables scripting | `false` |
| `scripting.env` | Environment variables for the scripting | `{}` |
| `scripting.source` | Source directory of the scripts | `""` |
| `scripting.watchScripts` | Enable watch mode for the scripts in source directory | `true` |

View File

@@ -37,7 +37,7 @@ spec:
- -loglevel
- '{{ .Values.logLevel | default "warning" }}'
- -capture-stop-after
- "{{ if hasKey .Values.tap.capture "stopAfter" }}{{ .Values.tap.capture.stopAfter }}{{ else }}5m{{ end }}"
- "{{ if hasKey .Values.tap.capture.dissection "stopAfter" }}{{ .Values.tap.capture.dissection.stopAfter }}{{ else }}5m{{ end }}"
- -snapshot-size-limit
- '{{ .Values.tap.snapshots.storageSize }}'
{{- if .Values.tap.delayedDissection.image }}

View File

@@ -48,6 +48,12 @@ spec:
value: '{{ not (eq .Values.tap.auth.saml.idpMetadataUrl "") | ternary .Values.tap.auth.saml.idpMetadataUrl " " }}'
- name: REACT_APP_TIMEZONE
value: '{{ not (eq .Values.timezone "") | ternary .Values.timezone " " }}'
- name: REACT_APP_SCRIPTING_HIDDEN
value: '{{- if and .Values.scripting (eq (.Values.scripting.enabled | toString) "false") -}}
true
{{- else -}}
false
{{- end }}'
- name: REACT_APP_SCRIPTING_DISABLED
value: '{{- if .Values.tap.liveConfigMapChangesDisabled -}}
{{- if .Values.demoModeEnabled -}}
@@ -66,11 +72,13 @@ spec:
value: '{{ eq .Values.tap.packetCapture "af_packet" | ternary "false" "true" }}'
- name: REACT_APP_RECORDING_DISABLED
value: '{{ .Values.tap.liveConfigMapChangesDisabled }}'
- name: REACT_APP_STOP_TRAFFIC_CAPTURING_DISABLED
value: '{{- if and .Values.tap.liveConfigMapChangesDisabled .Values.tap.capture.stopped -}}
false
- name: REACT_APP_DISSECTION_ENABLED
value: '{{ .Values.tap.capture.dissection.enabled | ternary "true" "false" }}'
- name: REACT_APP_DISSECTION_CONTROL_ENABLED
value: '{{- if and .Values.tap.liveConfigMapChangesDisabled (not .Values.tap.capture.dissection.enabled) -}}
true
{{- else -}}
{{ .Values.tap.liveConfigMapChangesDisabled | ternary "true" "false" }}
{{ not .Values.tap.liveConfigMapChangesDisabled | ternary "true" "false" }}
{{- end -}}'
- name: 'REACT_APP_CLOUD_LICENSE_ENABLED'
value: '{{- if or (and .Values.cloudLicenseEnabled (not (empty .Values.license))) (not .Values.internetConnectivity) -}}
@@ -78,8 +86,6 @@ spec:
{{- else -}}
{{ .Values.cloudLicenseEnabled }}
{{- end }}'
- name: 'REACT_APP_AI_ASSISTANT_ENABLED'
value: '{{ .Values.aiAssistantEnabled | ternary "true" "false" }}'
- name: REACT_APP_SUPPORT_CHAT_ENABLED
value: '{{ and .Values.supportChatEnabled .Values.internetConnectivity | ternary "true" "false" }}'
- name: REACT_APP_BETA_ENABLED

View File

@@ -402,8 +402,8 @@ spec:
- hostPath:
path: /
name: root
- name: data
{{- end }}
- name: data
{{- if .Values.tap.persistentStorage }}
persistentVolumeClaim:
claimName: kubeshark-persistent-volume-claim

View File

@@ -11,7 +11,7 @@ data:
NAMESPACES: '{{ gt (len .Values.tap.namespaces) 0 | ternary (join "," .Values.tap.namespaces) "" }}'
EXCLUDED_NAMESPACES: '{{ gt (len .Values.tap.excludedNamespaces) 0 | ternary (join "," .Values.tap.excludedNamespaces) "" }}'
BPF_OVERRIDE: '{{ .Values.tap.bpfOverride }}'
STOPPED: '{{ .Values.tap.capture.stopped | ternary "true" "false" }}'
DISSECTION_ENABLED: '{{ .Values.tap.capture.dissection.enabled | ternary "true" "false" }}'
CAPTURE_SELF: '{{ .Values.tap.capture.captureSelf | ternary "true" "false" }}'
SCRIPTING_SCRIPTS: '{}'
SCRIPTING_ACTIVE_SCRIPTS: '{{ gt (len .Values.scripting.active) 0 | ternary (join "," .Values.scripting.active) "" }}'
@@ -56,11 +56,11 @@ data:
TARGETED_PODS_UPDATE_DISABLED: '{{ .Values.tap.liveConfigMapChangesDisabled | ternary "true" "" }}'
PRESET_FILTERS_CHANGING_ENABLED: '{{ .Values.tap.liveConfigMapChangesDisabled | ternary "false" "true" }}'
RECORDING_DISABLED: '{{ .Values.tap.liveConfigMapChangesDisabled | ternary "true" "" }}'
STOP_TRAFFIC_CAPTURING_DISABLED: '{{- if and .Values.tap.liveConfigMapChangesDisabled .Values.tap.capture.stopped -}}
false
{{- else -}}
{{ .Values.tap.liveConfigMapChangesDisabled | ternary "true" "false" }}
{{- end }}'
DISSECTION_CONTROL_ENABLED: '{{- if and .Values.tap.liveConfigMapChangesDisabled (not .Values.tap.capture.dissection.enabled) -}}
true
{{- else -}}
{{ not .Values.tap.liveConfigMapChangesDisabled | ternary "true" "false" }}
{{- end }}'
GLOBAL_FILTER: {{ include "kubeshark.escapeDoubleQuotes" .Values.tap.globalFilter | quote }}
DEFAULT_FILTER: {{ include "kubeshark.escapeDoubleQuotes" .Values.tap.defaultFilter | quote }}
TRAFFIC_SAMPLE_RATE: '{{ .Values.tap.misc.trafficSampleRate }}'
@@ -73,7 +73,6 @@ data:
{{- else -}}
{{ .Values.cloudLicenseEnabled }}
{{- end }}'
AI_ASSISTANT_ENABLED: '{{ .Values.aiAssistantEnabled | ternary "true" "false" }}'
DUPLICATE_TIMEFRAME: '{{ .Values.tap.misc.duplicateTimeframe }}'
ENABLED_DISSECTORS: '{{ gt (len .Values.tap.enabledDissectors) 0 | ternary (join "," .Values.tap.enabledDissectors) "" }}'
CUSTOM_MACROS: '{{ toJson .Values.tap.customMacros }}'
@@ -84,5 +83,5 @@ data:
PCAP_MAX_TIME: '{{ .Values.pcapdump.maxTime }}'
PCAP_MAX_SIZE: '{{ .Values.pcapdump.maxSize }}'
PORT_MAPPING: '{{ toJson .Values.tap.portMapping }}'
RAW_CAPTURE: '{{ .Values.tap.capture.raw.enabled | ternary "true" "false" }}'
RAW_CAPTURE_ENABLED: '{{ .Values.tap.capture.raw.enabled | ternary "true" "false" }}'
RAW_CAPTURE_STORAGE_SIZE: '{{ .Values.tap.capture.raw.storageSize }}'

View File

@@ -26,8 +26,9 @@ tap:
excludedNamespaces: []
bpfOverride: ""
capture:
stopped: false
stopAfter: 5m
dissection:
enabled: true
stopAfter: 5m
captureSelf: false
raw:
enabled: true
@@ -145,6 +146,7 @@ tap:
canDelete: true
canUpdateTargetedPods: true
canStopTrafficCapturing: true
canControlDissection: true
showAdminConsoleLink: true
ingress:
enabled: false
@@ -189,8 +191,8 @@ tap:
- diameter
- udp-flow
- tcp-flow
- tcp-flow-full
- udp-flow-full
- tcp-conn
- udp-conn
portMapping:
http:
- 80
@@ -270,12 +272,12 @@ headless: false
license: ""
cloudApiUrl: "https://api.kubeshark.com"
cloudLicenseEnabled: true
aiAssistantEnabled: true
demoModeEnabled: false
supportChatEnabled: false
betaEnabled: false
internetConnectivity: true
scripting:
enabled: false
env: {}
source: ""
sources: []

View File

@@ -256,7 +256,7 @@ data:
NAMESPACES: ''
EXCLUDED_NAMESPACES: ''
BPF_OVERRIDE: ''
STOPPED: 'false'
DISSECTION_ENABLED: 'true'
SCRIPTING_SCRIPTS: '{}'
SCRIPTING_ACTIVE_SCRIPTS: ''
INGRESS_ENABLED: 'false'
@@ -276,7 +276,7 @@ data:
TARGETED_PODS_UPDATE_DISABLED: ''
PRESET_FILTERS_CHANGING_ENABLED: 'true'
RECORDING_DISABLED: ''
STOP_TRAFFIC_CAPTURING_DISABLED: 'false'
DISSECTION_CONTROL_ENABLED: 'true'
GLOBAL_FILTER: ""
DEFAULT_FILTER: ""
TRAFFIC_SAMPLE_RATE: '100'
@@ -287,7 +287,7 @@ data:
CLOUD_LICENSE_ENABLED: 'true'
AI_ASSISTANT_ENABLED: 'true'
DUPLICATE_TIMEFRAME: '200ms'
ENABLED_DISSECTORS: 'amqp,dns,http,icmp,kafka,redis,ws,ldap,radius,diameter,udp-flow,tcp-flow'
ENABLED_DISSECTORS: 'amqp,dns,http,icmp,kafka,redis,ws,ldap,radius,diameter,udp-flow,tcp-flow,udp-conn,tcp-conn'
CUSTOM_MACROS: '{"https":"tls and (http or http2)"}'
DISSECTORS_UPDATING_ENABLED: 'true'
DETECT_DUPLICATES: 'false'
@@ -296,7 +296,7 @@ data:
PCAP_MAX_TIME: '1h'
PCAP_MAX_SIZE: '500MB'
PORT_MAPPING: '{"amqp":[5671,5672],"diameter":[3868],"http":[80,443,8080],"kafka":[9092],"ldap":[389],"redis":[6379]}'
RAW_CAPTURE: 'true'
RAW_CAPTURE_ENABLED: 'true'
RAW_CAPTURE_STORAGE_SIZE: '1Gi'
---
# Source: kubeshark/templates/02-cluster-role.yaml
@@ -953,8 +953,8 @@ spec:
value: 'true'
- name: REACT_APP_RECORDING_DISABLED
value: 'false'
- name: REACT_APP_STOP_TRAFFIC_CAPTURING_DISABLED
value: 'false'
- name: REACT_APP_DISSECTION_CONTROL_ENABLED
value: 'true'
- name: 'REACT_APP_CLOUD_LICENSE_ENABLED'
value: 'true'
- name: 'REACT_APP_AI_ASSISTANT_ENABLED'

mcp/README.md

@@ -59,6 +59,18 @@ Add to your Claude Desktop configuration:
}
}
```
or:
```json
{
"mcpServers": {
"kubeshark": {
"command": "kubeshark",
"args": ["mcp"]
}
}
}
```
#### With Destructive Operations
@@ -174,11 +186,18 @@ src.pod.name == "frontend-.*"
http and src.namespace == "default" and response.status == 500
```
## MCP Registry
Kubeshark is published to the [MCP Registry](https://registry.modelcontextprotocol.io/) automatically on each release.
The `server.json` in this directory is a reference file. The actual registry metadata (version, SHA256 hashes) is auto-generated during the release workflow. See [`.github/workflows/release.yml`](../.github/workflows/release.yml) for details.
## Links
- [Documentation](https://docs.kubeshark.com/en/mcp)
- [GitHub](https://github.com/kubeshark/kubeshark)
- [Website](https://kubeshark.com)
- [MCP Registry](https://registry.modelcontextprotocol.io/)
## License

mcp/server.json

@@ -1,16 +1,16 @@
{
"$schema": "https://registry.modelcontextprotocol.io/schemas/server.schema.json",
"name": "com.kubeshark/mcp",
"$schema": "https://static.modelcontextprotocol.io/schemas/2025-12-11/server.schema.json",
"name": "io.github.kubeshark/mcp",
"displayName": "Kubeshark",
"description": "Real-time Kubernetes network traffic visibility and API analysis. Query L7 API transactions (HTTP, gRPC, Redis, Kafka, DNS), L4 network flows, and manage Kubeshark deployments directly from AI assistants.",
"icon": "https://kubeshark.com/favicon.ico",
"description": "Real-time Kubernetes network traffic visibility and API analysis for HTTP, gRPC, Redis, Kafka, DNS.",
"icon": "https://raw.githubusercontent.com/kubeshark/assets/refs/heads/master/logo/ico/icon.ico",
"repository": {
"url": "https://github.com/kubeshark/kubeshark",
"source": "github"
},
"homepage": "https://kubeshark.com",
"license": "Apache-2.0",
"version": "52.12.0",
"version": "AUTO_GENERATED_AT_RELEASE",
"authors": [
{
"name": "Kubeshark",
@@ -24,41 +24,38 @@
"debugging",
"security"
],
"tags": [
"kubernetes",
"network",
"traffic",
"api",
"http",
"grpc",
"kafka",
"redis",
"dns",
"pcap",
"wireshark",
"tcpdump",
"observability",
"debugging",
"microservices"
],
"_note": "version and packages.fileSha256 are auto-generated at release time by .github/workflows/release.yml",
"tags": ["kubernetes", "network", "traffic", "api", "http", "grpc", "kafka", "redis", "dns", "pcap", "wireshark", "tcpdump", "observability", "debugging", "microservices"],
"packages": [
{
"registryType": "github-releases",
"name": "kubeshark/kubeshark",
"version": "52.12.0",
"runtime": "binary",
"platforms": [
"darwin-arm64",
"darwin-amd64",
"linux-arm64",
"linux-amd64",
"windows-amd64"
],
"transport": {
"type": "stdio",
"command": "kubeshark",
"args": ["mcp"]
}
"registryType": "mcpb",
"identifier": "https://github.com/kubeshark/kubeshark/releases/download/vX.Y.Z/kubeshark-mcp_darwin_arm64.mcpb",
"fileSha256": "AUTO_GENERATED",
"transport": { "type": "stdio" }
},
{
"registryType": "mcpb",
"identifier": "https://github.com/kubeshark/kubeshark/releases/download/vX.Y.Z/kubeshark-mcp_darwin_amd64.mcpb",
"fileSha256": "AUTO_GENERATED",
"transport": { "type": "stdio" }
},
{
"registryType": "mcpb",
"identifier": "https://github.com/kubeshark/kubeshark/releases/download/vX.Y.Z/kubeshark-mcp_linux_arm64.mcpb",
"fileSha256": "AUTO_GENERATED",
"transport": { "type": "stdio" }
},
{
"registryType": "mcpb",
"identifier": "https://github.com/kubeshark/kubeshark/releases/download/vX.Y.Z/kubeshark-mcp_linux_amd64.mcpb",
"fileSha256": "AUTO_GENERATED",
"transport": { "type": "stdio" }
},
{
"registryType": "mcpb",
"identifier": "https://github.com/kubeshark/kubeshark/releases/download/vX.Y.Z/kubeshark-mcp_windows_amd64.mcpb",
"fileSha256": "AUTO_GENERATED",
"transport": { "type": "stdio" }
}
],
"tools": [
@@ -136,38 +133,14 @@
}
],
"prompts": [
{
"name": "analyze_traffic",
"description": "Analyze API traffic patterns and identify issues"
},
{
"name": "find_errors",
"description": "Find and summarize API errors and failures"
},
{
"name": "trace_request",
"description": "Trace a request path through microservices"
},
{
"name": "show_topology",
"description": "Show service communication topology"
},
{
"name": "latency_analysis",
"description": "Analyze latency patterns and identify slow endpoints"
},
{
"name": "security_audit",
"description": "Audit traffic for security concerns"
},
{
"name": "compare_traffic",
"description": "Compare traffic patterns between time periods"
},
{
"name": "debug_connection",
"description": "Debug connectivity issues between services"
}
{ "name": "analyze_traffic", "description": "Analyze API traffic patterns and identify issues" },
{ "name": "find_errors", "description": "Find and summarize API errors and failures" },
{ "name": "trace_request", "description": "Trace a request path through microservices" },
{ "name": "show_topology", "description": "Show service communication topology" },
{ "name": "latency_analysis", "description": "Analyze latency patterns and identify slow endpoints" },
{ "name": "security_audit", "description": "Audit traffic for security concerns" },
{ "name": "compare_traffic", "description": "Compare traffic patterns between time periods" },
{ "name": "debug_connection", "description": "Debug connectivity issues between services" }
],
"configuration": {
"properties": {