kubeshark/api/pkg/resolver

Usage

Full example

errOut := make(chan error, 100)
k8sResolver, err := resolver.NewFromOutOfCluster("", errOut)
if err != nil {
    fmt.Printf("error creating k8s resolver %s\n", err)
    return // don't proceed with a nil resolver
}

ctx, cancel := context.WithCancel(context.Background())
defer cancel() // stops the resolver's watch goroutines on exit
k8sResolver.Start(ctx)

resolvedName := k8sResolver.Resolve("10.107.251.91") // returns `nil` this early in real scenarios, as the internal map takes a moment to populate after `Start` is called
if resolvedName != nil {
    fmt.Printf("resolved 10.107.251.91=%s\n", *resolvedName)
} else {
    fmt.Printf("could not find a resolved name for 10.107.251.91\n")
}

for err := range errOut {
    fmt.Printf("name resolving error %s\n", err)
}

In cluster authentication

Create resolver using the function NewFromInCluster(errOut chan error)

Out of cluster authentication

Create resolver using the function NewFromOutOfCluster(kubeConfigPath string, errOut chan error)

The kubeConfigPath parameter is optional; pass an empty string ("") and the resolver will locate the default kubeconfig file automatically.

Error handling

Please ensure there is always a goroutine reading from the errOut channel; if nothing drains it, the resolver's goroutines will block on sends and the resolver will stop updating.

Also note that an error received on this channel does not necessarily mean the resolver has stopped: it retries watching the k8s resources indefinitely, until the provided context is cancelled.