Compare commits

..

8 Commits

Author SHA1 Message Date
Alon Girmonsky
81d5ca8ffc Enable MongoDB protocol dissector
Add mongodb to the enabled dissectors list and port mapping (27017)
in both Go config defaults and Helm chart values.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-06 12:39:44 -07:00
Alon Girmonsky
1ba6ed94e0 💄 Improve README with AI skills, KFL semantics, and cloud storage (#1892)
* 💄 Improve README with AI skills, KFL semantics image, and cloud storage

- Add AI Skills section with Network RCA and KFL skills, Claude Code plugin install
- Rename "Network Traffic Indexing" to "Query with API, Kubernetes, and Network Semantics" with new KFL semantics image showing how a single query combines all three layers
- Add cloud storage providers (S3, Azure Blob, GCS) and decrypted TLS to Traffic Retention section
- Update Features table: add AI Skills, KFL query language, cloud storage, delayed indexing

* 🔒 Add encrypted traffic visibility to README "What you can do" section

* 🎨 Update snapshots image in README

---------

Co-authored-by: Alon Girmonsky <alongir@Alons-Mac-Studio.local>
2026-04-02 18:38:13 -07:00
Alon Girmonsky
4695acb41e 🐛 Fix release-pr Makefile target cleanup and macOS sed compatibility (#1890)
- Fix macOS sed -i requiring empty backup extension argument
- Checkout master after creating kubeshark release PR
- Checkout master in kubeshark.github.io before and after creating helm PR
- Run all kubeshark.github.io operations in a single shell to avoid losing the `cd` context

Co-authored-by: Alon Girmonsky <alongir@Alons-Mac-Studio.local>
2026-03-31 12:05:21 -07:00
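The macOS `sed -i` issue this commit describes comes from BSD sed requiring an explicit backup-extension argument after `-i`, which GNU sed does not accept the same way. A minimal sketch of one portable approach, branching on `uname` (an illustration of the incompatibility, not necessarily what the Makefile itself does):

```shell
# BSD/macOS sed requires an argument after -i (the backup extension);
# passing '' satisfies BSD sed, but GNU sed would treat '' as the script.
# Branching on uname keeps both platforms working.
f=$(mktemp)
printf 'version: "53.1.0"\n' > "$f"
if [ "$(uname)" = "Darwin" ]; then
  sed -i '' 's/^version:.*/version: "53.2.0"/' "$f"
else
  sed -i 's/^version:.*/version: "53.2.0"/' "$f"
fi
result=$(cat "$f")
rm -f "$f"
echo "$result"
```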
Alon Girmonsky
b80723edfb 🔖 Bump the Helm chart version to 53.2.0 (#1889)
Co-authored-by: Alon Girmonsky <alongir@Alons-Mac-Studio.local>
2026-03-31 11:30:42 -07:00
Alon Girmonsky
ddc2e57f12 Network RCA skill: use local timezone instead of UTC (#1880)
* Use local timezone instead of UTC in Network RCA skill output

Add a Timezone Handling section that instructs the agent to detect the
local timezone, present local time as the primary reference with UTC in
parentheses, and convert UTC tool responses before presenting to users.
Update all example timestamps to demonstrate the local+UTC format.

Closes #1879
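The local-plus-UTC presentation described above can be sketched in shell; GNU `date` is assumed, and the timestamp is illustrative:

```shell
# Render a UTC timestamp as "local time (UTC time)", the format the
# skill's examples demonstrate. GNU date's -d flag is assumed.
utc_ts="2026-03-24T12:03:05Z"
local_part=$(date -d "$utc_ts" "+%Y-%m-%d %H:%M:%S %Z")
utc_part=$(date -u -d "$utc_ts" "+%H:%M:%S UTC")
echo "$local_part ($utc_part)"
```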

* Ensure agent proactively starts dissection for workload/API queries

The agent was waiting for dissection to complete without ever starting it.
Add explicit instructions: check dissection status first, start it if
missing, and default to the Dissection route for any non-PCAP question.
Only PCAP-specific requests can skip dissection.

* Translate every API/Kubernetes question into a fresh list_api_calls query

Add "Every Question Is a Query" section: each user prompt with API or
Kubernetes semantics should map to a list_api_calls call with the
appropriate KFL filter. Includes examples of natural language to KFL
translation. Agent should never answer from memory or stale results.
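As a hypothetical illustration of this natural-language-to-KFL mapping (the query reuses field names that appear in the KFL examples elsewhere in this diff):

```
// "Which payment calls failed in production?" (one possible translation)
local_labels.app == "payment" && http && status_code >= 500 && dst.pod.namespace == "production"
```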

---------

Co-authored-by: Alon Girmonsky <alongir@Alons-Mac-Studio.local>
2026-03-24 12:03:05 -07:00
Alon Girmonsky
e80fc3319b Revamp README descriptions and structure (#1881)
* Revamp README intro, sections, and descriptions

Rewrite the opening description to focus on indexing and querying.
Replace "What's captured" with actionable "What you can do" bullets.
Add port-forward step and ingress recommendation to Get Started.
Rename and tighten section descriptions: Network Data for AI Agents,
Network Traffic Indexing, Workload Dependency Map, Traffic Retention
& PCAP Export.

* Remove Raw Capture from features table
2026-03-23 08:33:27 -07:00
Volodymyr Stoiko
868b4c1f36 Verify hub/front pods are ready by conditions (#1864)
* Verify hub/front pods are ready by conditions

* log waiting for readiness

* proper sync

---------

Co-authored-by: Alon Girmonsky <1990761+alongir@users.noreply.github.com>
2026-03-21 17:33:48 -07:00
Serhii Ponomarenko
c63740ec45 🐛 Fix dissection-control front env logic (#1878) 2026-03-20 08:20:53 -07:00
11 changed files with 272 additions and 157 deletions

View File

@@ -268,7 +268,7 @@ release-pr: ## Step 1: Tag sibling repos, bump version, create release PR.
@cd ../hub && git checkout master && git pull && git tag -d v$(VERSION); git tag v$(VERSION) && git push origin --tags
@cd ../front && git checkout master && git pull && git tag -d v$(VERSION); git tag v$(VERSION) && git push origin --tags
@cd ../kubeshark && git checkout master && git pull
@sed -i "s/^version:.*/version: \"$(shell echo $(VERSION) | sed -E 's/^([0-9]+\.[0-9]+\.[0-9]+)\..*/\1/')\"/" helm-chart/Chart.yaml
@sed -i '' "s/^version:.*/version: \"$(shell echo $(VERSION) | sed -E 's/^([0-9]+\.[0-9]+\.[0-9]+)\..*/\1/')\"/" helm-chart/Chart.yaml
@$(MAKE) build VER=$(VERSION)
@if [ "$(shell uname)" = "Darwin" ]; then \
codesign --sign - --force --preserve-metadata=entitlements,requirements,flags,runtime ./bin/kubeshark__; \
@@ -282,10 +282,12 @@ release-pr: ## Step 1: Tag sibling repos, bump version, create release PR.
--body "Automated release PR for v$(VERSION)." \
--base master \
--reviewer corest
@rm -rf ../kubeshark.github.io/charts/chart
@mkdir ../kubeshark.github.io/charts/chart
@cp -r helm-chart/ ../kubeshark.github.io/charts/chart/
@cd ../kubeshark.github.io && git checkout master && git pull \
@git checkout master && git pull
@cd ../kubeshark.github.io \
&& git checkout master && git pull \
&& rm -rf charts/chart \
&& mkdir charts/chart \
&& cp -r ../kubeshark/helm-chart/ charts/chart/ \
&& git checkout -b helm-v$(VERSION) \
&& git add -A . \
&& git commit -m ":sparkles: Update the Helm chart to v$(VERSION)" \
@@ -293,8 +295,8 @@ release-pr: ## Step 1: Tag sibling repos, bump version, create release PR.
&& gh pr create --title ":sparkles: Helm chart v$(VERSION)" \
--body "Update Helm chart for release v$(VERSION)." \
--base master \
--reviewer corest
@cd ../kubeshark
--reviewer corest \
&& git checkout master
@echo ""
@echo "Release PRs created:"
@echo " - kubeshark: Review and merge the release PR."

View File

@@ -17,17 +17,14 @@
---
Kubeshark captures cluster-wide network traffic at the speed and scale of Kubernetes, continuously, at the kernel level using eBPF. It consolidates a highly fragmented picture — dozens of nodes, thousands of workloads, millions of connections — into a single, queryable view with full Kubernetes and API context.
Kubeshark indexes cluster-wide network traffic at the kernel level using eBPF — delivering instant answers to any query using network, API, and Kubernetes semantics.
Network data is available to **AI agents via [MCP](https://docs.kubeshark.com/en/mcp)** and to **human operators via a [dashboard](https://docs.kubeshark.com/en/v2)**.
**What you can do:**
**What's captured, cluster-wide:**
- **L4 Packets & TCP Metrics** — retransmissions, RTT, window saturation, connection lifecycle, packet loss across every node-to-node path ([TCP insights →](https://docs.kubeshark.com/en/mcp/tcp_insights))
- **L7 API Calls** — real-time request/response matching with full payload parsing: HTTP, gRPC, GraphQL, Redis, Kafka, DNS ([API dissection →](https://docs.kubeshark.com/en/v2/l7_api_dissection))
- **Decrypted TLS** — eBPF-based TLS decryption without key management
- **Kubernetes Context** — every packet and API call resolved to pod, service, namespace, and node
- **PCAP Retention** — point-in-time raw packet snapshots, exportable for Wireshark ([Snapshots →](https://docs.kubeshark.com/en/v2/traffic_snapshots))
- **Download Retrospective PCAPs** — cluster-wide packet captures filtered by nodes, time, workloads, and IPs. Store PCAPs for long-term retention and later investigation.
- **Visualize Network Data** — explore traffic matching queries with API, Kubernetes, or network semantics through a real-time dashboard.
- **See Encrypted Traffic in Plain Text** — automatically decrypt TLS/mTLS traffic using eBPF, with no key management or sidecars required.
- **Integrate with AI** — connect your favorite AI assistant (e.g. Claude, Copilot) to include network data in AI-driven workflows like incident response and root cause analysis.
![Kubeshark](https://github.com/kubeshark/assets/raw/master/png/stream.png)
@@ -38,9 +35,12 @@ Network data is available to **AI agents via [MCP](https://docs.kubeshark.com/en
```bash
helm repo add kubeshark https://helm.kubeshark.com
helm install kubeshark kubeshark/kubeshark
kubectl port-forward svc/kubeshark-front 8899:80
```
Dashboard opens automatically. You're capturing traffic.
Open `http://localhost:8899` in your browser. You're capturing traffic.
> For production use, we recommend using an [ingress controller](https://docs.kubeshark.com/en/ingress) instead of port-forward.
**Connect an AI agent** via MCP:
@@ -53,9 +53,9 @@ claude mcp add kubeshark -- kubeshark mcp
---
### AI-Powered Network Analysis
### Network Data for AI Agents
Kubeshark exposes all cluster-wide network data via MCP (Model Context Protocol). AI agents can query L4 metrics, investigate L7 API calls, analyze traffic patterns, and run root cause analysis through natural language. Use cases include incident response, root cause analysis, troubleshooting, debugging, and reliability workflows.
Kubeshark exposes cluster-wide network data via [MCP](https://docs.kubeshark.com/en/mcp) — enabling AI agents to query traffic, investigate API calls, and perform root cause analysis through natural language.
> *"Why did checkout fail at 2:15 PM?"*
> *"Which services have error rates above 1%?"*
@@ -68,31 +68,51 @@ Works with Claude Code, Cursor, and any MCP-compatible AI.
[MCP setup guide →](https://docs.kubeshark.com/en/mcp)
### AI Skills
Open-source, reusable skills that teach AI agents domain-specific workflows on top of Kubeshark's MCP tools:
| Skill | Description |
|-------|-------------|
| **[Network RCA](skills/network-rca/)** | Retrospective root cause analysis — snapshots, dissection, PCAP extraction, trend comparison |
| **[KFL](skills/kfl/)** | KFL (Kubeshark Filter Language) expert — writes, debugs, and optimizes traffic filters |
Install as a Claude Code plugin:
```
/plugin marketplace add kubeshark/kubeshark
/plugin install kubeshark
```
Or clone and use directly — skills trigger automatically based on conversation context.
[AI Skills docs →](https://docs.kubeshark.com/en/mcp/skills)
---
### L7 API Dissection
### Query with API, Kubernetes, and Network Semantics
Cluster-wide request/response matching with full payloads, parsed according to protocol specifications. HTTP, gRPC, Redis, Kafka, DNS, and more. Every API call resolved to source and destination pod, service, namespace, and node. No code instrumentation required.
Kubeshark indexes cluster-wide network traffic by parsing it according to protocol specifications, with support for HTTP, gRPC, Redis, Kafka, DNS, and more. A single [KFL query](https://docs.kubeshark.com/en/v2/kfl2) can combine all three semantic layers — Kubernetes identity, API context, and network attributes — to pinpoint exactly the traffic you need. No code instrumentation required.
![API context](https://github.com/kubeshark/assets/raw/master/png/api_context.png)
![KFL query combining API, Kubernetes, and network semantics](https://github.com/kubeshark/assets/raw/master/png/kfl-semantics.png)
[Learn more](https://docs.kubeshark.com/en/v2/l7_api_dissection)
[KFL reference →](https://docs.kubeshark.com/en/v2/kfl2) · [Traffic indexing](https://docs.kubeshark.com/en/v2/l7_api_dissection)
### L4/L7 Workload Map
### Workload Dependency Map
Cluster-wide view of service communication: dependencies, traffic flow, and anomalies across all nodes and namespaces.
A visual map of how workloads communicate, showing dependencies, traffic volume, and protocol usage across the cluster.
![Service Map](https://github.com/kubeshark/assets/raw/master/png/servicemap.png)
[Learn more →](https://docs.kubeshark.com/en/v2/service_map)
### Traffic Retention
### Traffic Retention & PCAP Export
Continuous raw packet capture with point-in-time snapshots. Export PCAP files for offline analysis with Wireshark or other tools.
Capture and retain raw network traffic cluster-wide, including decrypted TLS. Download PCAPs scoped by time range, nodes, workloads, and IPs — ready for Wireshark or any PCAP-compatible tool. Store snapshots in cloud storage (S3, Azure Blob, GCS) for long-term retention and cross-cluster sharing.
![Traffic Retention](https://github.com/kubeshark/assets/raw/master/png/snapshots.png)
![Traffic Retention](https://github.com/kubeshark/assets/raw/master/png/snapshots-list.png)
[Snapshots guide →](https://docs.kubeshark.com/en/v2/traffic_snapshots)
[Snapshots guide →](https://docs.kubeshark.com/en/v2/traffic_snapshots) · [Cloud storage →](https://docs.kubeshark.com/en/snapshots_cloud_storage)
---
@@ -100,13 +120,12 @@ Continuous raw packet capture with point-in-time snapshots. Export PCAP files fo
| Feature | Description |
|---------|-------------|
| [**Raw Capture**](https://docs.kubeshark.com/en/v2/raw_capture) | Continuous cluster-wide packet capture with minimal overhead |
| [**Traffic Snapshots**](https://docs.kubeshark.com/en/v2/traffic_snapshots) | Point-in-time snapshots, export as PCAP for Wireshark |
| [**L7 API Dissection**](https://docs.kubeshark.com/en/v2/l7_api_dissection) | Request/response matching with full payloads and protocol parsing |
| [**Traffic Snapshots**](https://docs.kubeshark.com/en/v2/traffic_snapshots) | Point-in-time snapshots with cloud storage (S3, Azure Blob, GCS), PCAP export for Wireshark |
| [**Traffic Indexing**](https://docs.kubeshark.com/en/v2/l7_api_dissection) | Real-time and delayed L7 indexing with request/response matching and full payloads |
| [**Protocol Support**](https://docs.kubeshark.com/en/protocols) | HTTP, gRPC, GraphQL, Redis, Kafka, DNS, and more |
| [**TLS Decryption**](https://docs.kubeshark.com/en/encrypted_traffic) | eBPF-based decryption without key management |
| [**AI-Powered Analysis**](https://docs.kubeshark.com/en/v2/ai_powered_analysis) | Query cluster-wide network data with Claude, Cursor, or any MCP-compatible AI |
| [**Display Filters**](https://docs.kubeshark.com/en/v2/kfl2) | Wireshark-inspired display filters for precise traffic analysis |
| [**TLS Decryption**](https://docs.kubeshark.com/en/encrypted_traffic) | eBPF-based decryption without key management, included in snapshots |
| [**AI Integration**](https://docs.kubeshark.com/en/mcp) | MCP server + open-source AI skills for network RCA and traffic filtering |
| [**KFL Query Language**](https://docs.kubeshark.com/en/v2/kfl2) | CEL-based query language with Kubernetes, API, and network semantics |
| [**100% On-Premises**](https://docs.kubeshark.com/en/air_gapped) | Air-gapped support, no external dependencies |
---

View File

@@ -40,9 +40,11 @@ type Readiness struct {
}
var ready *Readiness
var proxyOnce sync.Once
func tap() {
ready = &Readiness{}
proxyOnce = sync.Once{}
state.startTime = time.Now()
log.Info().Str("registry", config.Config.Tap.Docker.Registry).Str("tag", config.Config.Tap.Docker.Tag).Msg("Using Docker:")
@@ -147,11 +149,21 @@ func printNoPodsFoundSuggestion(targetNamespaces []string) {
log.Warn().Msg(fmt.Sprintf("Did not find any currently running pods that match the regex argument, %s will automatically target matching pods if any are created later%s", misc.Software, suggestionStr))
}
func isPodReady(pod *core.Pod) bool {
for _, condition := range pod.Status.Conditions {
if condition.Type == core.PodReady {
return condition.Status == core.ConditionTrue
}
}
return false
}
func watchHubPod(ctx context.Context, kubernetesProvider *kubernetes.Provider, cancel context.CancelFunc) {
podExactRegex := regexp.MustCompile(fmt.Sprintf("^%s", kubernetes.HubPodName))
podWatchHelper := kubernetes.NewPodWatchHelper(kubernetesProvider, podExactRegex)
eventChan, errorChan := kubernetes.FilteredWatch(ctx, podWatchHelper, []string{config.Config.Tap.Release.Namespace}, podWatchHelper)
isPodReady := false
podReady := false
podRunning := false
timeAfter := time.After(120 * time.Second)
for {
@@ -183,26 +195,30 @@ func watchHubPod(ctx context.Context, kubernetesProvider *kubernetes.Provider, c
Interface("containers-statuses", modifiedPod.Status.ContainerStatuses).
Msg("Watching pod.")
if modifiedPod.Status.Phase == core.PodRunning && !isPodReady {
isPodReady = true
if isPodReady(modifiedPod) && !podReady {
podReady = true
ready.Lock()
ready.Hub = true
ready.Unlock()
log.Info().Str("pod", kubernetes.HubPodName).Msg("Ready.")
} else if modifiedPod.Status.Phase == core.PodRunning && !podRunning {
podRunning = true
log.Info().Str("pod", kubernetes.HubPodName).Msg("Waiting for readiness...")
}
ready.Lock()
proxyDone := ready.Proxy
hubPodReady := ready.Hub
frontPodReady := ready.Front
ready.Unlock()
if !proxyDone && hubPodReady && frontPodReady {
ready.Lock()
ready.Proxy = true
ready.Unlock()
postFrontStarted(ctx, kubernetesProvider, cancel)
if hubPodReady && frontPodReady {
proxyOnce.Do(func() {
ready.Lock()
ready.Proxy = true
ready.Unlock()
postFrontStarted(ctx, kubernetesProvider, cancel)
})
}
case kubernetes.EventBookmark:
break
@@ -223,7 +239,7 @@ func watchHubPod(ctx context.Context, kubernetesProvider *kubernetes.Provider, c
cancel()
case <-timeAfter:
if !isPodReady {
if !podReady {
log.Error().
Str("pod", kubernetes.HubPodName).
Msg("Pod was not ready in time.")
@@ -242,7 +258,8 @@ func watchFrontPod(ctx context.Context, kubernetesProvider *kubernetes.Provider,
podExactRegex := regexp.MustCompile(fmt.Sprintf("^%s", kubernetes.FrontPodName))
podWatchHelper := kubernetes.NewPodWatchHelper(kubernetesProvider, podExactRegex)
eventChan, errorChan := kubernetes.FilteredWatch(ctx, podWatchHelper, []string{config.Config.Tap.Release.Namespace}, podWatchHelper)
isPodReady := false
podReady := false
podRunning := false
timeAfter := time.After(120 * time.Second)
for {
@@ -274,25 +291,29 @@ func watchFrontPod(ctx context.Context, kubernetesProvider *kubernetes.Provider,
Interface("containers-statuses", modifiedPod.Status.ContainerStatuses).
Msg("Watching pod.")
if modifiedPod.Status.Phase == core.PodRunning && !isPodReady {
isPodReady = true
if isPodReady(modifiedPod) && !podReady {
podReady = true
ready.Lock()
ready.Front = true
ready.Unlock()
log.Info().Str("pod", kubernetes.FrontPodName).Msg("Ready.")
} else if modifiedPod.Status.Phase == core.PodRunning && !podRunning {
podRunning = true
log.Info().Str("pod", kubernetes.FrontPodName).Msg("Waiting for readiness...")
}
ready.Lock()
proxyDone := ready.Proxy
hubPodReady := ready.Hub
frontPodReady := ready.Front
ready.Unlock()
if !proxyDone && hubPodReady && frontPodReady {
ready.Lock()
ready.Proxy = true
ready.Unlock()
postFrontStarted(ctx, kubernetesProvider, cancel)
if hubPodReady && frontPodReady {
proxyOnce.Do(func() {
ready.Lock()
ready.Proxy = true
ready.Unlock()
postFrontStarted(ctx, kubernetesProvider, cancel)
})
}
case kubernetes.EventBookmark:
break
@@ -312,7 +333,7 @@ func watchFrontPod(ctx context.Context, kubernetesProvider *kubernetes.Provider,
Msg("Failed creating pod.")
case <-timeAfter:
if !isPodReady {
if !podReady {
log.Error().
Str("pod", kubernetes.FrontPodName).
Msg("Pod was not ready in time.")
@@ -429,9 +450,6 @@ func postFrontStarted(ctx context.Context, kubernetesProvider *kubernetes.Provid
watchScripts(ctx, kubernetesProvider, false)
}
if config.Config.Scripting.Console {
go runConsoleWithoutProxy()
}
}
func updateConfig(kubernetesProvider *kubernetes.Provider) {

View File

@@ -128,6 +128,7 @@ func CreateDefaultConfig() ConfigStruct {
"http",
"icmp",
"kafka",
"mongodb",
"redis",
// "sctp",
// "syscall",
@@ -147,6 +148,7 @@ func CreateDefaultConfig() ConfigStruct {
HTTP: []uint16{80, 443, 8080},
AMQP: []uint16{5671, 5672},
KAFKA: []uint16{9092},
MONGODB: []uint16{27017},
REDIS: []uint16{6379},
LDAP: []uint16{389},
DIAMETER: []uint16{3868},

View File

@@ -282,6 +282,7 @@ type PortMapping struct {
HTTP []uint16 `yaml:"http" json:"http"`
AMQP []uint16 `yaml:"amqp" json:"amqp"`
KAFKA []uint16 `yaml:"kafka" json:"kafka"`
MONGODB []uint16 `yaml:"mongodb" json:"mongodb"`
REDIS []uint16 `yaml:"redis" json:"redis"`
LDAP []uint16 `yaml:"ldap" json:"ldap"`
DIAMETER []uint16 `yaml:"diameter" json:"diameter"`

View File

@@ -1,6 +1,6 @@
apiVersion: v2
name: kubeshark
version: "53.1.0"
version: "53.2.0"
description: The API Traffic Analyzer for Kubernetes
home: https://kubeshark.com
keywords:

View File

@@ -70,7 +70,7 @@ spec:
value: '{{- if and (not .Values.demoModeEnabled) (not .Values.tap.capture.dissection.enabled) -}}
true
{{- else -}}
{{ not (default false .Values.demoModeEnabled) | ternary false true }}
{{ (default false .Values.demoModeEnabled) | ternary false true }}
{{- end -}}'
- name: 'REACT_APP_CLOUD_LICENSE_ENABLED'
value: '{{- if or (and .Values.cloudLicenseEnabled (not (empty .Values.license))) (not .Values.internetConnectivity) -}}

View File

@@ -205,6 +205,7 @@ tap:
- http
- icmp
- kafka
- mongodb
- redis
- ws
- ldap
@@ -224,6 +225,8 @@ tap:
- 5672
kafka:
- 9092
mongodb:
- 27017
redis:
- 6379
ldap:

View File

@@ -4,10 +4,10 @@ apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
labels:
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
name: kubeshark-hub-network-policy
namespace: default
@@ -33,10 +33,10 @@ apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
labels:
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-front-network-policy
@@ -60,10 +60,10 @@ apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
labels:
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-dex-network-policy
@@ -87,10 +87,10 @@ apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
labels:
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-worker-network-policy
@@ -116,10 +116,10 @@ apiVersion: v1
kind: ServiceAccount
metadata:
labels:
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
name: kubeshark-service-account
namespace: default
@@ -132,10 +132,10 @@ metadata:
namespace: default
labels:
app.kubeshark.com/app: hub
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
stringData:
LICENSE: ''
@@ -151,10 +151,10 @@ metadata:
namespace: default
labels:
app.kubeshark.com/app: hub
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
stringData:
AUTH_SAML_X509_CRT: |
@@ -167,10 +167,10 @@ metadata:
namespace: default
labels:
app.kubeshark.com/app: hub
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
stringData:
AUTH_SAML_X509_KEY: |
@@ -182,10 +182,10 @@ metadata:
name: kubeshark-nginx-config-map
namespace: default
labels:
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
data:
default.conf: |
@@ -248,10 +248,10 @@ metadata:
namespace: default
labels:
app.kubeshark.com/app: hub
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
data:
POD_REGEX: '.*'
@@ -276,9 +276,9 @@ data:
AUTH_OIDC_BYPASS_SSL_CA_CHECK: 'false'
TELEMETRY_DISABLED: 'false'
SCRIPTING_DISABLED: 'false'
TARGETED_PODS_UPDATE_DISABLED: ''
TARGETED_PODS_UPDATE_DISABLED: 'false'
PRESET_FILTERS_CHANGING_ENABLED: 'true'
RECORDING_DISABLED: ''
RECORDING_DISABLED: 'false'
DISSECTION_CONTROL_ENABLED: 'true'
GLOBAL_FILTER: ""
DEFAULT_FILTER: ""
@@ -292,6 +292,8 @@ data:
ENABLED_DISSECTORS: 'amqp,dns,http,icmp,kafka,redis,ws,ldap,radius,diameter,udp-flow,tcp-flow,udp-conn,tcp-conn'
CUSTOM_MACROS: '{"https":"tls and (http or http2)"}'
DISSECTORS_UPDATING_ENABLED: 'true'
SNAPSHOTS_UPDATING_ENABLED: 'true'
DEMO_MODE_ENABLED: 'false'
DETECT_DUPLICATES: 'false'
PCAP_DUMP_ENABLE: 'false'
PCAP_TIME_INTERVAL: '1m'
@@ -306,10 +308,10 @@ apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
name: kubeshark-cluster-role-default
namespace: default
@@ -353,10 +355,10 @@ apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
name: kubeshark-cluster-role-binding-default
namespace: default
@@ -374,10 +376,10 @@ apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-self-config-role
@@ -424,10 +426,10 @@ apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-self-config-role-binding
@@ -447,10 +449,10 @@ kind: Service
metadata:
labels:
app.kubeshark.com/app: hub
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
name: kubeshark-hub
namespace: default
@@ -468,10 +470,10 @@ apiVersion: v1
kind: Service
metadata:
labels:
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
name: kubeshark-front
namespace: default
@@ -489,10 +491,10 @@ kind: Service
apiVersion: v1
metadata:
labels:
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
annotations:
prometheus.io/scrape: 'true'
@@ -502,10 +504,10 @@ metadata:
spec:
selector:
app.kubeshark.com/app: worker
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
ports:
- name: metrics
@@ -518,10 +520,10 @@ kind: Service
apiVersion: v1
metadata:
labels:
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
annotations:
prometheus.io/scrape: 'true'
@@ -531,10 +533,10 @@ metadata:
spec:
selector:
app.kubeshark.com/app: hub
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
ports:
- name: metrics
@@ -549,10 +551,10 @@ metadata:
labels:
app.kubeshark.com/app: worker
sidecar.istio.io/inject: "false"
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
name: kubeshark-worker-daemon-set
namespace: default
@@ -566,10 +568,10 @@ spec:
metadata:
labels:
app.kubeshark.com/app: worker
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
name: kubeshark-worker-daemon-set
namespace: kubeshark
@@ -579,7 +581,7 @@ spec:
- /bin/sh
- -c
- mkdir -p /sys/fs/bpf && mount | grep -q '/sys/fs/bpf' || mount -t bpf bpf /sys/fs/bpf
image: 'docker.io/kubeshark/worker:v53.1'
image: 'docker.io/kubeshark/worker:v53.2'
imagePullPolicy: Always
name: mount-bpf
securityContext:
@@ -618,7 +620,7 @@ spec:
- '500Mi'
- -cloud-api-url
- 'https://api.kubeshark.com'
image: 'docker.io/kubeshark/worker:v53.1'
image: 'docker.io/kubeshark/worker:v53.2'
imagePullPolicy: Always
name: sniffer
ports:
@@ -690,7 +692,7 @@ spec:
- -disable-tls-log
- -loglevel
- 'warning'
image: 'docker.io/kubeshark/worker:v53.1'
image: 'docker.io/kubeshark/worker:v53.2'
imagePullPolicy: Always
name: tracer
env:
@@ -782,10 +784,10 @@ kind: Deployment
metadata:
labels:
app.kubeshark.com/app: hub
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
name: kubeshark-hub
namespace: default
@@ -800,10 +802,10 @@ spec:
metadata:
labels:
app.kubeshark.com/app: hub
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
spec:
dnsPolicy: ClusterFirstWithHostNet
@@ -819,9 +821,9 @@ spec:
- -capture-stop-after
- "5m"
- -snapshot-size-limit
- ''
- '20Gi'
- -dissector-image
- 'docker.io/kubeshark/worker:v53.1'
- 'docker.io/kubeshark/worker:v53.2'
- -dissector-cpu
- '1'
- -dissector-memory
@@ -843,7 +845,7 @@ spec:
value: 'production'
- name: PROFILING_ENABLED
value: 'false'
image: 'docker.io/kubeshark/hub:v53.1'
image: 'docker.io/kubeshark/hub:v53.2'
imagePullPolicy: Always
readinessProbe:
periodSeconds: 5
@@ -911,10 +913,10 @@ kind: Deployment
metadata:
labels:
app.kubeshark.com/app: front
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
name: kubeshark-front
namespace: default
@@ -929,10 +931,10 @@ spec:
metadata:
labels:
app.kubeshark.com/app: front
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
spec:
containers:
@@ -973,13 +975,19 @@ spec:
value: 'false'
- name: REACT_APP_DISSECTORS_UPDATING_ENABLED
value: 'true'
- name: REACT_APP_SNAPSHOTS_UPDATING_ENABLED
value: 'true'
- name: REACT_APP_DEMO_MODE_ENABLED
value: 'false'
- name: REACT_APP_CLUSTER_WIDE_MAP_ENABLED
value: 'false'
- name: REACT_APP_RAW_CAPTURE_ENABLED
value: 'true'
- name: REACT_APP_SENTRY_ENABLED
value: 'false'
- name: REACT_APP_SENTRY_ENVIRONMENT
value: 'production'
image: 'docker.io/kubeshark/front:v53.1'
image: 'docker.io/kubeshark/front:v53.2'
imagePullPolicy: Always
name: kubeshark-front
livenessProbe:


@@ -125,26 +125,13 @@ Match against any direction (src or dst):
### Labels and Annotations
```
// Direct access — works when the label is expected to exist
local_labels.app == "payment" || remote_labels.app == "payment"
// Safe access with default — use when the label may not exist
map_get(local_labels, "app", "") == "checkout"
map_get(local_labels, "app", "") == "checkout" // Safe access with default
map_get(remote_labels, "version", "") == "canary"
// Label existence check
"tier" in local_labels
"tier" in local_labels // Label existence check
```
Direct access (`local_labels.app`) returns an error if the key doesn't exist.
Use `map_get()` when you're not sure the label is present on all workloads.
Queries can be as complex as needed — combine labels with any other fields.
Responses are fast because all API elements are indexed:
```
local_labels.app == "payment" && http && status_code >= 500 && dst.pod.namespace == "production"
```
Always use `map_get()` for labels and annotations — direct access like
`local_labels["app"]` errors if the key doesn't exist.
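As a sketch, an agent can assemble such combined filters programmatically. The helper below is hypothetical glue code, not part of KFL or any Kubeshark API — it just shows how label, protocol, and Kubernetes clauses compose with `&&`:

```python
def kfl_and(*clauses: str) -> str:
    """Join non-empty KFL clauses with && to combine API,
    Kubernetes, and network fields into one indexed query."""
    return " && ".join(c for c in clauses if c)

query = kfl_and(
    'map_get(local_labels, "app", "") == "payment"',  # Kubernetes label (safe access)
    "http",                                           # protocol layer
    "status_code >= 500",                             # API layer
    'dst.pod.namespace == "production"',              # Kubernetes layer
)
print(query)
```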
### Node and Process


@@ -29,6 +29,31 @@ Unlike real-time monitoring, retrospective analysis lets you go back in time:
reconstruct what happened, compare against known-good baselines, and pinpoint
root causes with full L4/L7 visibility.
## Timezone Handling
All timestamps presented to the user **must use the local timezone** of the environment
where the agent is running. Users think in local time ("this happened around 3pm"), and
UTC-only output adds friction during incident response when speed matters.
### Rules
1. **Detect the local timezone** at the start of every investigation. Use the system
clock or environment (e.g., `date +%Z` or equivalent) to determine the timezone.
2. **Present local time as the primary reference** in all output — summaries, event
correlations, time-range references, and tables.
3. **Show UTC in parentheses** for clarity, e.g., `15:03:22 IST (12:03:22 UTC)`.
4. **Convert tool responses** — Kubeshark MCP tools return timestamps in UTC. Always
convert these to local time before presenting to the user.
5. **Use local time in natural language** — when describing events, say "the spike at
3:23 PM" not "the spike at 12:23 UTC".
### Snapshot Creation
When creating snapshots, Kubeshark MCP tools accept UTC timestamps. Convert the user's
local time references to UTC before passing them to tools like `create_snapshot` or
`export_snapshot_pcap`. Confirm the converted window with the user if there's any
ambiguity.
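A minimal sketch of the two conversions (local time in for tool calls, UTC out for presentation) using Python's `zoneinfo`. The timezone name `Asia/Jerusalem` and the timestamp format are illustrative assumptions, not values mandated by the tools:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

FMT = "%Y-%m-%d %H:%M:%S"

def local_to_utc(local_str: str, tz_name: str) -> str:
    """Convert a user's local time reference into a UTC string for snapshot tools."""
    local = datetime.strptime(local_str, FMT).replace(tzinfo=ZoneInfo(tz_name))
    return local.astimezone(ZoneInfo("UTC")).strftime(FMT + " UTC")

def utc_to_local(utc_str: str, tz_name: str) -> str:
    """Convert a UTC timestamp from a tool response into local time for the user."""
    utc = datetime.strptime(utc_str, FMT).replace(tzinfo=ZoneInfo("UTC"))
    return utc.astimezone(ZoneInfo(tz_name)).strftime(FMT + " %Z")

# "around 3:23 PM local" in Israel (UTC+2 in mid-March) becomes a UTC boundary:
print(local_to_utc("2026-03-14 15:23:00", "Asia/Jerusalem"))  # 2026-03-14 13:23:00 UTC
```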
## Prerequisites
Before starting any analysis, verify the environment is ready.
@@ -103,6 +128,11 @@ Both routes are valid and complementary. Use PCAP when you need raw packets
for human analysis or compliance. Use Dissection when you want an AI agent
to search and analyze traffic programmatically.
**Default to Dissection.** Unless the user explicitly asks for a PCAP file or
Wireshark export, assume Dissection is needed. Any question about workloads,
APIs, services, pods, error rates, latency, or traffic patterns requires
dissected data.
## Snapshot Operations
Both routes start here. A snapshot is an immutable freeze of all cluster traffic
@@ -116,19 +146,19 @@ Check what raw capture data exists across the cluster. You can only create
snapshots within these boundaries — data outside the window has been rotated
out of the FIFO buffer.
**Example response**:
**Example response** (raw tool output is in UTC — convert to local time before presenting):
```
Cluster-wide:
Oldest: 2026-03-14 16:12:34 UTC
Newest: 2026-03-14 18:05:20 UTC
Oldest: 2026-03-14 18:12:34 IST (16:12:34 UTC)
Newest: 2026-03-14 20:05:20 IST (18:05:20 UTC)
Per node:
┌─────────────────────────────┬──────────┬──────────┐
│ Node                        │ Oldest   │ Newest   │
├─────────────────────────────┼──────────┼──────────┤
│ ip-10-0-25-170.ec2.internal │ 16:12:34 │ 18:03:39 │
│ ip-10-0-32-115.ec2.internal │ 16:13:45 │ 18:05:20 │
└─────────────────────────────┴──────────┴──────────┘
┌─────────────────────────────┬─────────────────────────────┬─────────────────────────────┐
│ Node                        │ Oldest                      │ Newest                      │
├─────────────────────────────┼─────────────────────────────┼─────────────────────────────┤
│ ip-10-0-25-170.ec2.internal │ 18:12:34 IST (16:12:34 UTC) │ 20:03:39 IST (18:03:39 UTC) │
│ ip-10-0-32-115.ec2.internal │ 18:13:45 IST (16:13:45 UTC) │ 20:05:20 IST (18:05:20 UTC) │
└─────────────────────────────┴─────────────────────────────┴─────────────────────────────┘
```
If the incident falls outside the available window, the data has been rotated
@@ -232,7 +262,30 @@ KFL field names differ from what you might expect (e.g., `status_code` not
`response.status`, `src.pod.namespace` not `src.namespace`). Using incorrect
fields produces wrong results without warning.
### Activate Dissection
### Dissection Is Required — Do Not Skip This
**Any question about workloads, Kubernetes resources, services, pods, namespaces,
or API calls requires dissection.** Only the PCAP route works without it. If the
user asks anything about traffic content, API behavior, error rates, latency,
or service-to-service communication, you **must** ensure dissection is active
before attempting to answer.
**Do not wait for dissection to complete on its own — it will not start by itself.**
Follow this sequence every time before using `list_api_calls`, `get_api_call`,
or `get_api_stats`:
1. **Check status**: Call `get_snapshot_dissection_status` (or `list_snapshot_dissections`)
to see if a dissection already exists for this snapshot.
2. **If dissection exists and is completed** — proceed with your query. No further
action needed.
3. **If dissection is in progress** — wait for it to complete, then proceed.
4. **If no dissection exists** — you **must** call `start_snapshot_dissection` to
trigger it. Then monitor progress with `get_snapshot_dissection_status` until
it completes.
Never assume dissection is running. Never wait for a dissection that was not started.
The agent is responsible for triggering dissection when it is missing.
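The check/start/poll sequence above can be sketched as a small loop. The two callables stand in for the MCP tools `get_snapshot_dissection_status` and `start_snapshot_dissection`; the status strings and the polling interface are assumptions for illustration:

```python
import time

def ensure_dissection(get_status, start, snapshot_id,
                      poll_secs=5, timeout_secs=600):
    """Check for a dissection, trigger it if missing, and poll until complete.
    Returns True once the snapshot is queryable, False on timeout."""
    status = get_status(snapshot_id)      # 1. check for an existing dissection
    if status == "completed":
        return True                       # 2. already done — proceed with queries
    if status is None:                    # 4. missing — the agent must trigger it
        start(snapshot_id)
    deadline = time.monotonic() + timeout_secs
    while time.monotonic() < deadline:    # 3. in progress — wait for completion
        if get_status(snapshot_id) == "completed":
            return True
        time.sleep(poll_secs)
    return False
```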
**Tool**: `start_snapshot_dissection`
@@ -243,6 +296,27 @@ become available:
- `get_api_call` — Drill into a specific call (headers, body, timing, payload)
- `get_api_stats` — Aggregated statistics (throughput, error rates, latency)
### Every Question Is a Query
**Every user prompt that involves APIs, workloads, services, pods, namespaces,
or Kubernetes semantics should translate into a `list_api_calls` call with an
appropriate KFL filter.** Do not answer from memory or prior results — always
run a fresh query that matches what the user is asking.
Examples of user prompts and the queries they should trigger:
| User says | Action |
|---|---|
| "Show me all 500 errors" | `list_api_calls` with KFL: `http && status_code == 500` |
| "What's hitting the payment service?" | `list_api_calls` with KFL: `dst.service.name == "payment-service"` |
| "Any DNS failures?" | `list_api_calls` with KFL: `dns && status_code != 0` |
| "Show traffic from namespace prod to staging" | `list_api_calls` with KFL: `src.pod.namespace == "prod" && dst.pod.namespace == "staging"` |
| "What are the slowest API calls?" | `list_api_calls` with KFL: `http && elapsed_time > 5000000` |
The user's natural language maps to KFL. Your job is to translate intent into
the right filter and run the query — don't summarize old results or speculate
without fresh data.
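One mechanical piece of that translation is converting human thresholds into field units. Assuming `elapsed_time` is in microseconds — which the `5000000` threshold in the table suggests, though the unit is an inference here — "slower than 5 seconds" maps like this:

```python
def seconds_to_elapsed(seconds: float) -> int:
    """Convert a human latency threshold in seconds to an elapsed_time value,
    assuming the field is expressed in microseconds."""
    return int(seconds * 1_000_000)

# "what are the slowest API calls?" -> http && elapsed_time > 5000000
print(f"http && elapsed_time > {seconds_to_elapsed(5)}")
```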
### Investigation Strategy
Start broad, then narrow:
@@ -255,16 +329,17 @@ Start broad, then narrow:
full payload to understand what went wrong.
4. Use KFL filters to slice by namespace, service, protocol, or any combination.
**Example `list_api_calls` response** (filtered to `http && status_code >= 500`):
**Example `list_api_calls` response** (filtered to `http && status_code >= 500`,
timestamps converted from UTC to local):
```
┌──────────────────────┬────────┬──────────────────────────┬────────┬───────────┐
│ Timestamp            │ Method │ URL                      │ Status │ Elapsed   │
├──────────────────────┼────────┼──────────────────────────┼────────┼───────────┤
│ 2026-03-14 17:23:45  │ POST   │ /api/v1/orders/charge    │ 503    │ 12,340 ms │
│ 2026-03-14 17:23:46  │ POST   │ /api/v1/orders/charge    │ 503    │ 11,890 ms │
│ 2026-03-14 17:23:48  │ GET    │ /api/v1/inventory/check  │ 500    │ 8,210 ms  │
│ 2026-03-14 17:24:01  │ POST   │ /api/v1/payments/process │ 502    │ 30,000 ms │
└──────────────────────┴────────┴──────────────────────────┴────────┴───────────┘
┌──────────────────────────────────────────┬────────┬──────────────────────────┬────────┬───────────┐
│ Timestamp                                │ Method │ URL                      │ Status │ Elapsed   │
├──────────────────────────────────────────┼────────┼──────────────────────────┼────────┼───────────┤
│ 2026-03-14 19:23:45 IST (17:23:45 UTC)   │ POST   │ /api/v1/orders/charge    │ 503    │ 12,340 ms │
│ 2026-03-14 19:23:46 IST (17:23:46 UTC)   │ POST   │ /api/v1/orders/charge    │ 503    │ 11,890 ms │
│ 2026-03-14 19:23:48 IST (17:23:48 UTC)   │ GET    │ /api/v1/inventory/check  │ 500    │ 8,210 ms  │
│ 2026-03-14 19:24:01 IST (17:24:01 UTC)   │ POST   │ /api/v1/payments/process │ 502    │ 30,000 ms │
└──────────────────────────────────────────┴────────┴──────────────────────────┴────────┴───────────┘
Src: api-gateway (prod) → Dst: payment-service (prod)
```