This mainly involved importing the new libraries and using the new Clientset.
Changes worth mentioning:
* The new kubernetes library doesn't provide StoreToLister wrappers, so now I am doing the casting directly (sketched below).
* Deleting the pods and getting their logs is done in a cleaner way (using the
Clientset instead of the lower-level RESTClient).
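A minimal sketch of both changes, assuming a recent client-go (exact signatures, notably the context argument, vary between versions; names here are illustrative):

```go
package probe // illustrative

import (
	"context"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// Without StoreToLister wrappers, cast the stored object directly.
func podFromStore(store cache.Store, key string) (*corev1.Pod, bool) {
	obj, exists, err := store.GetByKey(key)
	if err != nil || !exists {
		return nil, false
	}
	pod, ok := obj.(*corev1.Pod)
	return pod, ok
}

// Deleting a pod and streaming its logs through the typed Clientset,
// instead of building requests on the lower-level RESTClient.
func deleteAndLog(clientset kubernetes.Interface, ns, name string) error {
	ctx := context.Background()

	req := clientset.CoreV1().Pods(ns).GetLogs(name, &corev1.PodLogOptions{})
	logs, err := req.Stream(ctx)
	if err != nil {
		return err
	}
	defer logs.Close()
	if _, err := io.Copy(os.Stdout, logs); err != nil {
		return err
	}

	return clientset.CoreV1().Pods(ns).Delete(ctx, name, metav1.DeleteOptions{})
}
```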
Since there are multiple types in the same topology, displaying the type is important.
We do this in multiple places:
* Add node type to minor label
* Add node type as metadata and include in metadata template.
Even though this will always be the same for every node of that topology, it was
the easiest way to add it so that it displays in the table view.
Note that we can't control the ordering of columns in the table view; it's always
alphabetical.
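A hypothetical sketch of how the type gets attached as metadata, using Scope-style node latests (the key name is an assumption, not the actual one used):

```go
package kubernetes // illustrative

import "github.com/weaveworks/scope/report"

// withNodeType records the node type in the node's latest map so it is
// picked up by the metadata template and shown in the table view.
func withNodeType(n report.Node, nodeType string) report.Node {
	return n.WithLatests(map[string]string{
		"kubernetes_node_type": nodeType, // hypothetical key name
	})
}
```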
Changed the default of the `-app.docker` flag to use the DOCKER_* env
variables instead of the hardcoded /var/run/docker.sock; Docker's default
is used if no DOCKER_HOST is defined, for both probe and app.
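A minimal sketch of the env-based default, assuming the fsouza/go-dockerclient library:

```go
package probe // illustrative

import docker "github.com/fsouza/go-dockerclient"

// newDockerClient reads DOCKER_HOST, DOCKER_TLS_VERIFY and
// DOCKER_CERT_PATH, falling back to the library default
// (unix:///var/run/docker.sock) when DOCKER_HOST is unset.
func newDockerClient() (*docker.Client, error) {
	return docker.NewClientFromEnv()
}
```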
Fixes #1975
ProcNet.Next does not allocate Connection structs, for efficiency.
Instead it always returns a *Connection pointing to the same instance.
As a result, any mutations by the caller to struct elements that
aren't actually set by ProcNet.Next, in particular Connection.Proc,
are carried across to subsequent calls.
This had hilarious consequences: connections referencing an inode
which we hadn't come across during proc walking would be associated
with the process corresponding to the last successfully looked up
inode.
The fix is to clear out the garbage left over from previous calls.
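A sketch of the reuse-and-clear pattern described above (the real ProcNet differs in detail; parsing is elided):

```go
package procspy // illustrative

type Process struct{ PID int }

type Connection struct {
	LocalAddr, RemoteAddr string
	Inode                 uint64
	Proc                  *Process // set by the caller, never by Next
}

// ProcNet reuses a single Connection across calls to avoid allocations.
type ProcNet struct {
	conn Connection
}

// Next parses the next entry into the shared instance. The fix is the
// zeroing step: without it, caller-set fields such as Proc leak into
// subsequent results.
func (p *ProcNet) Next() *Connection {
	p.conn = Connection{} // clear garbage left over from previous calls
	// ... parse the next /proc/net/tcp entry into p.conn ...
	return &p.conn
}
```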
Fixes #2638.
The information is constant and already present in the id, so we can
extract it from there.
That reduces the report size and improves report encoding/decoding
performance. It should also reduce memory usage and improve report
merging performance.
NB: Probes with this change are incompatible with old apps.
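A hypothetical sketch of the extraction, assuming ids of the form "<name>;<topology>" (the actual id scheme may differ):

```go
package report // illustrative

import "strings"

// topologyFromID derives the constant component from the node id
// instead of carrying it in the report; the id format is an assumption.
func topologyFromID(id string) string {
	if i := strings.LastIndex(id, ";"); i >= 0 {
		return id[i+1:]
	}
	return ""
}
```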
- eliminate the code duplication when falling back to procfs scanning
- trim some superfluous comments
Also fix a bug in the process: when falling back to procfs scanning
in ReportConnections, the scanner was given a "--any-nat" param, which
is wrong.
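A hypothetical sketch of the deduplicated fallback (all types and names here are assumptions):

```go
package endpoint // illustrative; types and names are assumptions

type connection struct{}

type reporter struct{ useConntrack bool }

func (r *reporter) conntrackConnections() ([]connection, error) { return nil, nil }
func (r *reporter) procfsConnections() ([]connection, error)    { return nil, nil }

// existingConnections is the single helper both call sites now share.
// Note the procfs path takes no "--any-nat" argument; that flag only
// makes sense for the conntrack-based walker.
func (r *reporter) existingConnections() ([]connection, error) {
	if r.useConntrack {
		if conns, err := r.conntrackConnections(); err == nil {
			return conns, nil
		}
	}
	return r.procfsConnections()
}
```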
Since https://github.com/weaveworks/tcptracer-bpf/pull/39, tcptracer-bpf
can generate "fd_install" events when a process installs a new file
descriptor in its fd table. Those events must be requested explicitly
on a per-pid basis with tracer.AddFdInstallWatcher(pid).
This is useful to know about "accept" events that would otherwise be
missed because kretprobes are not triggered for functions that were
called before the installation of the kretprobe.
This patch finds all the processes that are currently blocked on an
accept() syscall during the EbpfTracker initialization.
feedInitialConnections() will use tracer.AddFdInstallWatcher() to
subscribe to fd_install events. When an fd_install event is received,
we synthesise an accept event with the connection tuple and the network
namespace (from /proc).
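A sketch of that initialization step, assuming x86-64 syscall numbers (43 = accept, 288 = accept4); AddFdInstallWatcher's exact signature is an assumption, and the surrounding types are stubs:

```go
package endpoint // illustrative

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// EbpfTracker is a stub; the real one wraps the tcptracer-bpf tracer.
type EbpfTracker struct {
	tracer interface{ AddFdInstallWatcher(pid uint32) error }
}

// pidsBlockedInAccept scans /proc/<pid>/syscall for processes that are
// currently blocked in accept()/accept4() (x86-64 syscall numbers).
func pidsBlockedInAccept() []int {
	var pids []int
	entries, err := os.ReadDir("/proc")
	if err != nil {
		return nil
	}
	for _, e := range entries {
		pid, err := strconv.Atoi(e.Name())
		if err != nil {
			continue // not a process directory
		}
		data, err := os.ReadFile(fmt.Sprintf("/proc/%d/syscall", pid))
		if err != nil {
			continue
		}
		if f := strings.Fields(string(data)); len(f) > 0 && (f[0] == "43" || f[0] == "288") {
			pids = append(pids, pid)
		}
	}
	return pids
}

// feedInitialConnections subscribes to fd_install events for every
// process found blocked in accept() at startup.
func (t *EbpfTracker) feedInitialConnections() {
	for _, pid := range pidsBlockedInAccept() {
		t.tracer.AddFdInstallWatcher(uint32(pid))
	}
}
```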