kubeshark/integration/common_test.go
Alon Girmonsky 2ccd716a68 Add MCP registry metadata for official registry submission (#1835)
* Add MCP (Model Context Protocol) server command

Implement `kubeshark mcp` command that runs an MCP server over stdio,
enabling AI assistants to query Kubeshark's network visibility data.

Features:
- MCP protocol implementation (JSON-RPC 2.0 over stdio; see the sketch after this list)
- Dynamic tool discovery from Hub's /api/mcp endpoint
- Local cluster management tools (check_kubeshark_status, start_kubeshark, stop_kubeshark)
- --url flag for direct connection to existing Kubeshark deployment
- --kubeconfig flag for proxy mode with kubectl
- --allow-destructive flag to enable start/stop operations (safe by default)
- --list-tools flag to display available tools
- --mcp-config flag to generate MCP client configuration
- 5-minute cache TTL for Hub tools/prompts
- Prompts for common analysis tasks
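To make the transport concrete, here is a minimal sketch of a JSON-RPC 2.0 loop over stdio in the spirit of the description above; the `request` type and `handle` dispatcher are illustrative stand-ins, not the actual mcpRunner.go code:

```go
// Minimal JSON-RPC 2.0 stdio loop, one JSON message per line.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type request struct {
	JSONRPC string          `json:"jsonrpc"`
	ID      json.RawMessage `json:"id"`
	Method  string          `json:"method"`
	Params  json.RawMessage `json:"params,omitempty"`
}

// handle is a placeholder dispatcher over req.Method:
// "initialize", "tools/list", "tools/call", "prompts/list", ...
func handle(req request) any {
	return map[string]any{"method": req.Method}
}

func main() {
	scanner := bufio.NewScanner(os.Stdin)
	enc := json.NewEncoder(os.Stdout)
	for scanner.Scan() {
		var req request
		if err := json.Unmarshal(scanner.Bytes(), &req); err != nil {
			continue // skip malformed input lines
		}
		resp := map[string]any{"jsonrpc": "2.0", "id": req.ID, "result": handle(req)}
		if err := enc.Encode(resp); err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
	}
	// Surface read errors after the loop (one of the review fixes below).
	if err := scanner.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```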

* Address code review comments for MCP implementation

- Add 30s timeout to HTTP client to prevent hanging requests (see the sketch after this list)
- Add scanner.Err() check after stdin processing loop
- Close HTTP response bodies to prevent resource leaks
- Add goroutine to wait on started process to prevent zombies
- Simplify polling loop by removing ineffective context check
- Advertise check_kubeshark_status in URL mode (was callable but hidden)
- Update documentation to clarify URL mode only disables start/stop
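The resource-handling fixes follow standard Go patterns. A minimal sketch, where `fetchHubTools` and `startDetached` are illustrative helpers rather than the actual mcpRunner.go functions:

```go
package mcp

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"time"
)

// fetchHubTools bounds every request with a client-level timeout and
// always closes the response body.
func fetchHubTools(hubURL string) ([]byte, error) {
	client := &http.Client{Timeout: 30 * time.Second}
	resp, err := client.Get(hubURL + "/api/mcp")
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("hub returned %s", resp.Status)
	}
	return io.ReadAll(resp.Body)
}

// startDetached starts a child process and reaps it in the background,
// so a finished child never lingers as a zombie.
func startDetached(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	if err := cmd.Start(); err != nil {
		return err
	}
	go func() { _ = cmd.Wait() }()
	return nil
}
```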

* Fix lint errors in mcpRunner.go

- Use type conversion instead of struct literals for hubMCPTool -> mcpTool
  and hubMCPPromptArg -> mcpPromptArg (S1016 gosimple; illustrated below)
- Lowercase error string to follow Go conventions (ST1005 staticcheck)
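For reference, the shape of an S1016 fix: when two struct types have identical field sets, a direct conversion replaces the field-by-field copy. These types are stand-ins for hubMCPTool and mcpTool, not the real definitions:

```go
package mcp

type hubMCPToolExample struct {
	Name        string `json:"name"`
	Description string `json:"description"`
}

type mcpToolExample struct {
	Name        string `json:"name"`
	Description string `json:"description"`
}

func toMCPTool(h hubMCPToolExample) mcpToolExample {
	// Before (flagged by gosimple S1016):
	//   return mcpToolExample{Name: h.Name, Description: h.Description}
	// After: a direct conversion, legal because the field sets match.
	return mcpToolExample(h)
}
```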

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Add MCP server unit tests

Comprehensive unit tests for the MCP server implementation:
- Protocol tests (initialize, tools/list, tools/call, prompts/list, prompts/get)
- Tool tests (check_kubeshark_status, start_kubeshark, stop_kubeshark)
- Hub integration tests (tool fetching, caching, prompt handling)
- Error handling tests
- Edge case tests

* Fix MCP unit tests to use correct /tools/call endpoint

- Update all Hub tool tests to use POST /tools/call endpoint instead
  of individual paths like /workloads, /calls, /stats
- Verify arguments in POST body instead of URL query parameters
- Add newMockHubHandler helper for proper Hub endpoint mocking (see the sketch after this list)
- Split TestMCP_ToolsList into three tests:
  - TestMCP_ToolsList_CLIOnly: Tests without Hub backend
  - TestMCP_ToolsList_WithDestructive: Tests with destructive flag
  - TestMCP_ToolsList_WithHubBackend: Tests with mock Hub providing tools
- Fix TestMCP_FullConversation to mock Hub MCP endpoint correctly
- Rename URL encoding tests for clarity
- All tests now correctly reflect the implementation
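A sketch of what a newMockHubHandler-style helper can look like: an httptest server that accepts only POST /tools/call and decodes arguments from the body. The payload shapes here are assumptions, not the Hub's actual API:

```go
package mcp

import (
	"encoding/json"
	"net/http"
	"net/http/httptest"
)

type toolCallRequest struct {
	Name      string         `json:"name"`
	Arguments map[string]any `json:"arguments"`
}

func newMockHub() *httptest.Server {
	return httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Arguments arrive in the POST body, not in URL query parameters.
		if r.Method != http.MethodPost || r.URL.Path != "/tools/call" {
			http.NotFound(w, r)
			return
		}
		var call toolCallRequest
		if err := json.NewDecoder(r.Body).Decode(&call); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		_ = json.NewEncoder(w).Encode(map[string]any{"tool": call.Name, "ok": true})
	}))
}
```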

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Simplify MCP unit tests

- Remove section header comments (10 headers)
- Consolidate similar tests using table-driven patterns (see the sketch below)
- Simplify test assertions with more concise checks
- Combine edge case tests into single test function
- Reduce verbose test structures

Total reduction: 1477 → 495 lines (66%)
All 24 tests still pass.
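In miniature, the table-driven shape these consolidations use; `handleMethod` is a stub standing in for the server's real dispatcher:

```go
package mcp

import (
	"fmt"
	"testing"
)

// handleMethod is a stub dispatcher for illustration only.
func handleMethod(method string) (any, error) {
	switch method {
	case "initialize", "tools/list", "prompts/list":
		return map[string]any{}, nil
	default:
		return nil, fmt.Errorf("unknown method %q", method)
	}
}

func TestHandleMethod(t *testing.T) {
	tests := []struct {
		name    string
		method  string
		wantErr bool
	}{
		{"initialize", "initialize", false},
		{"list tools", "tools/list", false},
		{"unknown method", "no/such/method", true},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			if _, err := handleMethod(tt.method); (err != nil) != tt.wantErr {
				t.Fatalf("handleMethod(%q) error = %v, wantErr %v", tt.method, err, tt.wantErr)
			}
		})
	}
}
```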

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Add MCP integration test framework

Add integration tests that run against a real Kubernetes cluster:
- MCP protocol tests (initialize, tools/list, prompts/list)
- Cluster management tests (check_kubeshark_status, start_kubeshark, stop_kubeshark)
- Full lifecycle test (check -> start -> check -> stop -> check; sketched below)
- API tools tests (list_workloads, list_api_calls, get_api_stats)
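As a sketch, the lifecycle test composes the helpers defined in common_test.go (shown in full below); `startViaMCP` and `stopViaMCP` are placeholders for driving the actual start_kubeshark/stop_kubeshark tool calls through an MCP session:

```go
func TestFullLifecycle(t *testing.T) {
	requireKubernetesCluster(t)
	binary := getKubesharkBinary(t)
	cleanupKubeshark(t, binary)

	if isKubesharkRunning(t) {
		t.Fatal("cluster not clean before start")
	}
	startViaMCP(t, binary) // placeholder for the start_kubeshark tool call
	if err := waitForKubesharkReady(t, binary, startupTimeout); err != nil {
		t.Fatalf("Kubeshark never became ready: %v", err)
	}
	stopViaMCP(t, binary) // placeholder for the stop_kubeshark tool call
	if isKubesharkRunning(t) {
		t.Fatal("Kubeshark still running after stop")
	}
}
```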

Also includes:
- Makefile targets for running integration tests
- Test helper functions (startMCPSession, cleanupKubeshark, etc.)
- Documentation (README.md, TEMPLATE.md, ISSUE_TEMPLATE.md)

* Address review comments on integration tests

Makefile:
- Use unique temporary files (mktemp) instead of shared /tmp/integration-test.log
  to prevent race conditions when multiple test targets run concurrently
- Remove redundant test-integration-verbose target (test-integration already uses -v)
- Add cleanup (rm -f) for temporary log files

integration/mcp_test.go:
- Capture stderr from MCP server for debugging failures
- Add getStderr() method to mcpSession for accessing captured stderr (see the sketch after this list)
- Fix potential goroutine leak by adding return statements after t.Fatalf
- Remove t.Run subtests in TestMCP_APIToolsRequireKubeshark to clarify
  sequential execution with shared session
- Fix benchmark to use getKubesharkBinary helper for consistency
- Add Kubernetes cluster check to benchmark (graceful skip)
- Add proper error handling for pipe creation in benchmark
- Remove the unnecessary bytes-import workaround (the import is now genuinely used for stderr)
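A simplified sketch of the stderr-capture pattern; this `mcpSession` is a stand-in for the real type in integration/mcp_test.go:

```go
package integration

import (
	"bytes"
	"os/exec"
)

type mcpSession struct {
	cmd    *exec.Cmd
	stderr bytes.Buffer
}

func startMCPSession(binary string) (*mcpSession, error) {
	s := &mcpSession{cmd: exec.Command(binary, "mcp")}
	s.cmd.Stderr = &s.stderr // keep server diagnostics for failure reports
	if err := s.cmd.Start(); err != nil {
		return nil, err
	}
	return s, nil
}

// getStderr returns what the MCP server has written to stderr so far.
// (Real code should synchronize access while the process is running.)
func (s *mcpSession) getStderr() string { return s.stderr.String() }
```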

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Simplify and clean up MCP integration tests

- Remove unrelated L4 viewer files (1239 lines)
- Remove template/issue documentation files (419 lines)
- Trim README to essential content only
- Remove TEMPLATE comments from common_test.go
- Add initialize() helper to reduce test boilerplate
- Add hasKubernetesCluster() helper for benchmarks
- Simplify all test functions with consistent patterns

Total reduction: 2964 → 866 lines (71%)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Add MCP registry metadata for official registry submission

Add metadata files for submitting Kubeshark MCP server to the official
MCP registry at registry.modelcontextprotocol.io:

- mcp/server.json: Registry metadata with tools, prompts, and configuration
- mcp/README.md: MCP server documentation and usage guide

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-06 10:39:42 -08:00

218 lines
5.5 KiB
Go

//go:build integration

// Package integration contains integration tests that run against a real Kubernetes cluster.
// Run with: go test -tags=integration ./integration/...
package integration

import (
	"bytes"
	"context"
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
	"sync"
	"testing"
	"time"
)

const (
	binaryName     = "kubeshark"
	defaultTimeout = 2 * time.Minute
	startupTimeout = 3 * time.Minute
)

var (
	// binaryPath caches the built binary path
	binaryPath string
	buildOnce  sync.Once
	buildErr   error
)

// requireKubernetesCluster skips the test if no Kubernetes cluster is available.
func requireKubernetesCluster(t *testing.T) {
	t.Helper()
	if !hasKubernetesCluster() {
		t.Skip("Skipping: no Kubernetes cluster available")
	}
}

// hasKubernetesCluster returns true if a Kubernetes cluster is accessible.
func hasKubernetesCluster() bool {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	return exec.CommandContext(ctx, "kubectl", "cluster-info").Run() == nil
}

// getKubesharkBinary returns the path to the kubeshark binary, building it if necessary.
func getKubesharkBinary(t *testing.T) string {
	t.Helper()
	// Check if binary path is provided via environment
	if envBinary := os.Getenv("KUBESHARK_BINARY"); envBinary != "" {
		if _, err := os.Stat(envBinary); err == nil {
			return envBinary
		}
		t.Fatalf("KUBESHARK_BINARY set but file not found: %s", envBinary)
	}
	// Build once per test run
	buildOnce.Do(func() {
		binaryPath, buildErr = buildBinary(t)
	})
	if buildErr != nil {
		t.Fatalf("Failed to build binary: %v", buildErr)
	}
	return binaryPath
}

// buildBinary compiles the binary and returns its path.
func buildBinary(t *testing.T) (string, error) {
	t.Helper()
	// Find project root (directory containing go.mod)
	projectRoot, err := findProjectRoot()
	if err != nil {
		return "", fmt.Errorf("finding project root: %w", err)
	}
	outputPath := filepath.Join(projectRoot, "bin", binaryName+"_integration_test")
	t.Logf("Building %s binary at %s", binaryName, outputPath)
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	cmd := exec.CommandContext(ctx, "go", "build",
		"-o", outputPath,
		filepath.Join(projectRoot, binaryName+".go"),
	)
	cmd.Dir = projectRoot
	output, err := cmd.CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("build failed: %w\nOutput: %s", err, output)
	}
	return outputPath, nil
}

// findProjectRoot locates the project root by finding go.mod
func findProjectRoot() (string, error) {
	dir, err := os.Getwd()
	if err != nil {
		return "", err
	}
	for {
		if _, err := os.Stat(filepath.Join(dir, "go.mod")); err == nil {
			return dir, nil
		}
		parent := filepath.Dir(dir)
		if parent == dir {
			return "", fmt.Errorf("could not find go.mod in any parent directory")
		}
		dir = parent
	}
}

// runKubeshark executes the kubeshark binary with the given arguments.
// Returns combined stdout/stderr and any error.
func runKubeshark(t *testing.T, binary string, args ...string) (string, error) {
	t.Helper()
	return runKubesharkWithTimeout(t, binary, defaultTimeout, args...)
}

// runKubesharkWithTimeout executes the kubeshark binary with a custom timeout.
func runKubesharkWithTimeout(t *testing.T, binary string, timeout time.Duration, args ...string) (string, error) {
	t.Helper()
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	t.Logf("Running: %s %s", binary, strings.Join(args, " "))
	cmd := exec.CommandContext(ctx, binary, args...)
	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	err := cmd.Run()
	output := stdout.String()
	if stderr.Len() > 0 {
		output += "\n[stderr]\n" + stderr.String()
	}
	if ctx.Err() == context.DeadlineExceeded {
		return output, fmt.Errorf("command timed out after %v", timeout)
	}
	return output, err
}

// cleanupKubeshark ensures Kubeshark is not running in the cluster.
func cleanupKubeshark(t *testing.T, binary string) {
	t.Helper()
	if os.Getenv("INTEGRATION_SKIP_CLEANUP") == "true" {
		t.Log("Skipping cleanup (INTEGRATION_SKIP_CLEANUP=true)")
		return
	}
	t.Log("Cleaning up any existing Kubeshark installation...")
	// Run clean command, ignore errors (might not be installed)
	_, _ = runKubeshark(t, binary, "clean")
	// Wait a moment for resources to be deleted
	time.Sleep(2 * time.Second)
}

// waitForKubesharkReady waits for Kubeshark to be ready after starting.
func waitForKubesharkReady(t *testing.T, binary string, timeout time.Duration) error {
	t.Helper()
	t.Log("Waiting for Kubeshark to be ready...")
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Check if pods are running
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		cmd := exec.CommandContext(ctx, "kubectl", "get", "pods", "-l", "app.kubernetes.io/name=kubeshark", "-o", "jsonpath={.items[*].status.phase}")
		output, err := cmd.Output()
		cancel()
		if err == nil && strings.Contains(string(output), "Running") {
			t.Log("Kubeshark is ready")
			return nil
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("timeout waiting for Kubeshark to be ready")
}

// isKubesharkRunning checks if Kubeshark is currently running in the cluster.
func isKubesharkRunning(t *testing.T) bool {
	t.Helper()
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	cmd := exec.CommandContext(ctx, "kubectl", "get", "pods", "-l", "app.kubernetes.io/name=kubeshark", "-o", "name")
	output, err := cmd.Output()
	if err != nil {
		return false
	}
	return strings.TrimSpace(string(output)) != ""
}