These can be long-running operations, and if the client retries we get
the cancelled one running in parallel with the retry, slowing both down
and making it likely the next one will time out too.
This dependency makes it harder to see the structure of the program,
and sometimes complicates compilation.
Mostly just changing the source of strings that are already exported
from the report package. A few new strings have to be moved there,
plus the function `IsPauseImageName()`.
* Pass Go context down to Renderers
This is useful for cancellation or tracing; a rough sketch follows after this list.
* Add tracing spans to app
Also log things like number of nodes in Map, total number of reports.
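As a rough illustration of the context change above (the Report/Nodes types and the Renderer signature below are hypothetical; the real Scope interfaces differ):
```
// A minimal sketch: a context-aware renderer that can stop early when the
// client goes away or retries; tracing spans could hang off the same
// context. Types and signatures are made up for illustration.
package main

import (
	"context"
	"fmt"
)

type Report struct{}
type Nodes map[string]struct{}

// Renderer takes a context so rendering can be cancelled per request.
type Renderer interface {
	Render(ctx context.Context, rpt Report) Nodes
}

type processRenderer struct{}

func (processRenderer) Render(ctx context.Context, _ Report) Nodes {
	// Check for cancellation between expensive stages so an abandoned
	// request does not keep running in parallel with its retry.
	select {
	case <-ctx.Done():
		return Nodes{}
	default:
	}
	return Nodes{"process;host1;42": {}}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()
	fmt.Println(len(processRenderer{}.Render(ctx, Report{})))
}
```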
...when 'Hide Unconnected' is selected, which is the default.
Here's the problem...
The connectedness filter looks for `is_connected` marks in the Latest
map of the nodes. The mark is added by ColorConnected, which is
invoked by ProcessRenderer. That in turn is the base renderer for
ProcessNameRenderer, which is what the process-by-name view
renders. However, the process2Names mapping does not propagate any
metadata, hence there are no `is_connected` marks on the result
nodes. Consequently they are all filtered out.
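To illustrate (hypothetical Node type; the `is_connected` key name is an assumption): the filter keeps only nodes carrying the mark in Latest, so a mapping stage that does not propagate Latest loses every node.
```
// Minimal sketch of the connectedness filter described above.
package main

import "fmt"

type Node struct{ Latest map[string]string }

const isConnectedMark = "is_connected" // assumed key name

func filterUnconnected(nodes []Node) []Node {
	var out []Node
	for _, n := range nodes {
		if n.Latest[isConnectedMark] == "true" {
			out = append(out, n)
		}
	}
	return out
}

func main() {
	colored := Node{Latest: map[string]string{isConnectedMark: "true"}}
	mapped := Node{Latest: map[string]string{}} // metadata not propagated
	fmt.Println(len(filterUnconnected([]Node{colored, mapped}))) // prints 1
}
```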
The problem was introduced in #3009, when I added the ability for
users to choose whether to show or hide unconnected
processes (previously unconnected processes were always hidden) -
looks like I failed to check that the process-by-name view was working
with the new filter. Oops.
The fix is to move the ColorConnected call from ProcessRenderer to the
higher-level renderers - ProcessWithContainerNameRenderer (which is
what the 'Processes' view renders) and ProcessNameRenderer.
This has the beneficial side effect of improving performance for other
renderers which invoke ProcessRenderer, none of which need
connectedness-coloring.
Fixes #3205
This isn't quite as neat as we'd want to make it, because two of the
three call sites still require a closure, but it's still an
improvement.
Note that every instance of MapEndpoints only ever maps to one
topology, which, furthermore, is a core report topology. Hence we can
just parameterise MapEndpoints with that topology, and then the map
functions can return just the node ids.
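A rough sketch of the resulting shape, with made-up signatures rather than the real Scope ones:
```
// MapEndpoints is given the target topology once, so each map function
// only has to return a node id (or not-ok). Illustration only.
package main

import "fmt"

type Node struct{ ID, Topology string }

type endpointMapFunc func(endpoint Node) (id string, ok bool)

// MapEndpoints builds the destination node itself from the returned id
// and the topology it was parameterised with.
func MapEndpoints(f endpointMapFunc, topology string) func(Node) (Node, bool) {
	return func(endpoint Node) (Node, bool) {
		id, ok := f(endpoint)
		if !ok {
			return Node{}, false
		}
		return Node{ID: id, Topology: topology}, true
	}
}

func main() {
	mapToProcess := MapEndpoints(func(e Node) (string, bool) {
		return "process;" + e.ID, true
	}, "process")
	n, _ := mapToProcess(Node{ID: "host1;10.0.0.1;80", Topology: "endpoint"})
	fmt.Println(n.ID, n.Topology)
}
```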
After dropping extra metadata in the rest of this PR, our usage of
joinResults.add* only ever ends up creating minimal nodes, from just an
id and topology. Hence joinResults.add* can be invoked with simply an
id and topology instead of a generic node creation function.
The 'create' function passed to addChild is only ever invoked when we
cannot find a matching process in the process topology. In these cases
the host and pid will be _all_ the info we are ever going to have,
both of which are incorporated in the id. Node summarisation renders
that as best it can; adding the PID as metadata makes no difference
except for an extra line with that info in the detail panel, which
adds nothing since the PID is already included in the label.
It's a dynamic type check of sorts.
In the process we stop abusing ParseNodeID for extracting the host,
which in turn allows us to clarify its purpose.
Since we seed the joinResult with the nodes from the topology we are
mapping to, we know the 'create' function is only called when there is
no node with the specified id.
This neatly makes the 'create' function only do what it says,
i.e. return _new_ nodes.
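A minimal sketch of this behaviour with hypothetical types (the real joinResults API differs in detail): the results are seeded with the target topology's nodes, so the create path only runs for ids that are genuinely new, and new nodes need nothing beyond an id and a topology.
```
package main

import "fmt"

type Node struct{ ID, Topology string }

type joinResults struct{ nodes map[string]Node }

func newJoinResults(seed []Node) joinResults {
	j := joinResults{nodes: map[string]Node{}}
	for _, n := range seed {
		j.nodes[n.ID] = n // existing nodes from the topology we map to
	}
	return j
}

// addChild takes just an id and topology; a minimal node is created only
// when no seeded node with that id exists.
func (j joinResults) addChild(childID, id, topology string) {
	if _, ok := j.nodes[id]; !ok {
		j.nodes[id] = Node{ID: id, Topology: topology}
	}
	_ = childID // the real code would attach the child here
}

func main() {
	j := newJoinResults([]Node{{ID: "process;host1;42", Topology: "process"}})
	j.addChild("endpoint;host1;10.0.0.1;80", "process;host1;42", "process")
	j.addChild("endpoint;host1;10.0.0.2;80", "process;host1;7", "process")
	fmt.Println(len(j.nodes)) // prints 2
}
```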
We used to ignore source endpoints that are associated with multiple
destination endpoints, which is a partial workaround for our inability
to correctly represent two connections from the same source ip/port
but different processes, or the same destination ip/port but different
processes. See #2665.
However, that condition is too coarse. In particular, we end up
ignoring endpoints that are connected to NATed destinations, since the
latter are represented by two (or more) endpoints.
The change here corrects that.
This eliminates the awkward distinction between ProcessRenderer and
ColorConnectedProcessRenderer.
It also ensures that processes resulting from direct rendering of the
process topology (/api/topology/processes invokes
ProcessWithContainerNameRenderer and /api/topology/processes-by-name
invokes ProcessNameRenderer) are colored, and hence summarising
them correctly sets the 'linkable' property. This was the behaviour
prior to the revamping of the rendering pipeline. However, it doesn't
actually make a practical difference since process detail panels only
show other processes as connection endpoints, and these are always
marked linkable anyway.
This is preparatory to future refactorings: all existing calls are to
Endpoints which have no children and where we don't want a Counter.
We make addChildAndChildren an obvious extension of addChild even
though it adds a dead code path (we never call addChildAndChildren
with an endpoint).
and add a comment noting that other, non-shared top-level renderers
are not memoised.
This memoisation is effective when the browser requests multiple
topologies for the same report.
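For illustration, a memoising wrapper might look roughly like this (hypothetical Report/Nodes/Renderer types; not the actual Scope code):
```
// Results are cached per report id, so a browser asking for several
// topologies of the same report reuses the shared renderer's work.
package main

import (
	"fmt"
	"sync"
)

type Report struct{ ID string }
type Nodes map[string]struct{}

type Renderer interface{ Render(Report) Nodes }

type memoise struct {
	inner Renderer
	mu    sync.Mutex
	cache map[string]Nodes
}

func Memoise(r Renderer) Renderer {
	return &memoise{inner: r, cache: map[string]Nodes{}}
}

func (m *memoise) Render(rpt Report) Nodes {
	m.mu.Lock()
	defer m.mu.Unlock()
	if n, ok := m.cache[rpt.ID]; ok {
		return n // later topology requests for the same report hit this
	}
	n := m.inner.Render(rpt)
	m.cache[rpt.ID] = n
	return n
}

type countingRenderer struct{ calls int }

func (c *countingRenderer) Render(Report) Nodes { c.calls++; return Nodes{} }

func main() {
	base := &countingRenderer{}
	shared := Memoise(base)
	shared.Render(Report{ID: "report-1"})
	shared.Render(Report{ID: "report-1"})
	fmt.Println(base.calls) // prints 1
}
```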
The filtering of endpoints causes some non-eBPF-tracked connections to
be missed. Furthermore, the filtering is entirely pointless when the
probes run eBPF, since the filters just pass through eBPF-tracked
endpoints (for good reason: otherwise some connections would be
missed). So in that case it is just costing CPU, and removing it
actually improves performance.
Note that removing the filtering does not result in over-counting
connections, since counting is done by source ip:port pairs.
Fixes #2551.
Fixes #2558.
ProcessRenderer was coloring connected nodes because we need that info
for rendering details panels. However, the main process topology view
renderers that depend on ProcessRenderer were also doing the coloring
themselves. For the 'processes' topology that was literally
duplicating work. For the 'processes-by-name' topology that was
throwing away the process coloring, and then coloring at the name
level.
Solution: remove the coloring from the ProcessRenderer, thus
eliminating the duplicate/thrown-away work, and introduce a
ColorConnectedProcessRenderer which is only used in places that
populate details panels.
the information is constant and already present in the id, so we can
extract it from there.
That reduces the report size and improves report encoding/decoding
performance. It should also reduce memory usage and improve report
merging performance.
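For illustration only (the ';'-separated id layout and the helper names below are assumptions):
```
// The host and PID are already baked into the node id, so they need not
// be repeated as metadata in every report.
package main

import (
	"fmt"
	"strings"
)

func makeProcessNodeID(hostID, pid string) string {
	return hostID + ";" + pid
}

func parseProcessNodeID(id string) (hostID, pid string, ok bool) {
	parts := strings.SplitN(id, ";", 2)
	if len(parts) != 2 {
		return "", "", false
	}
	return parts[0], parts[1], true
}

func main() {
	id := makeProcessNodeID("host1", "4242")
	host, pid, _ := parseProcessNodeID(id)
	fmt.Println(host, pid) // recovered without storing extra metadata
}
```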
NB: Probes with this change are incompatible with old apps.
Based on work from Lorenzo, updated by Iago, Alban, Alessandro and
Michael.
This PR adds connection tracking using eBPF. This feature is not enabled by default.
For now, you can enable it by launching scope with the following command:
```
sudo ./scope launch --probe.ebpf.connections=true
```
This patch allows Scope to be notified of every connection event
without relying on parsing /proc/$pid/net/tcp{,6} and
/proc/$pid/fd/*, thereby improving performance.
We vendor https://github.com/iovisor/gobpf in Scope to load the
pre-compiled eBPF program and https://github.com/weaveworks/tcptracer-bpf
to guess the offsets of the structures we need in the kernel. In this
way we don't need a different pre-compiled eBPF object file per kernel.
The pre-compiled eBPF program is included in the vendoring of
tcptracer-bpf.
The eBPF program uses kprobes/kretprobes on the following kernel functions:
- tcp_v4_connect
- tcp_v6_connect
- tcp_set_state
- inet_csk_accept
- tcp_close
It generates "connect", "accept" and "close" events containing the
connection tuple as well as the pid and netns.
Note: the IPv6 events are not supported in Scope and thus not passed on.
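For illustration, a Go-side view of such an event might look roughly like this (field names are assumptions; the real layout comes from tcptracer-bpf):
```
package main

import "fmt"

type eventType int

const (
	eventConnect eventType = iota
	eventAccept
	eventClose
)

// tcpEvent is a hypothetical view of what each kprobe-generated event
// carries: the connection tuple plus the pid and network namespace.
type tcpEvent struct {
	Type         eventType
	Pid          uint32
	NetNS        uint64
	SAddr, DAddr string
	SPort, DPort uint16
}

func main() {
	ev := tcpEvent{Type: eventConnect, Pid: 4242, NetNS: 4026531992,
		SAddr: "10.32.0.1", DAddr: "10.32.0.2", SPort: 55555, DPort: 80}
	fmt.Printf("%+v\n", ev)
}
```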
probe/endpoint/ebpf.go maintains the list of connections. Similarly to
conntrack, it also keeps the dead connections for one iteration in order
to report short-lived connections.
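A minimal sketch of that idea with hypothetical types: closed connections are kept around for one walk, so a connection that opens and closes between two report iterations is still reported exactly once.
```
package main

import "fmt"

type fourTuple struct {
	SAddr, DAddr string
	SPort, DPort uint16
}

type tracker struct {
	open   map[fourTuple]struct{}
	closed map[fourTuple]struct{} // drained after each walk
}

func (t *tracker) onConnect(tp fourTuple) { t.open[tp] = struct{}{} }

func (t *tracker) onClose(tp fourTuple) {
	delete(t.open, tp)
	t.closed[tp] = struct{}{}
}

// walkConnections visits open plus recently-closed connections, then
// forgets the closed ones.
func (t *tracker) walkConnections(f func(fourTuple)) {
	for tp := range t.open {
		f(tp)
	}
	for tp := range t.closed {
		f(tp)
	}
	t.closed = map[fourTuple]struct{}{}
}

func main() {
	t := &tracker{open: map[fourTuple]struct{}{}, closed: map[fourTuple]struct{}{}}
	short := fourTuple{"10.32.0.1", "10.32.0.2", 55555, 80}
	t.onConnect(short)
	t.onClose(short) // closed before the next report walk
	count := 0
	t.walkConnections(func(fourTuple) { count++ })
	fmt.Println(count) // prints 1: the short-lived connection still shows up
}
```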
The code for parsing /proc/$pid/net/tcp{,6} and /proc/$pid/fd/* is still
there and still used at start-up because eBPF only brings us the events
and not the initial state. However, the /proc parsing for the initial
state is now done in the foreground instead of the background, via
newForegroundReader().
NAT resolution on connections from eBPF works in the same way as it did
on connections from /proc: by using conntrack. One of the two conntrack
instances is only started to get the initial state and then it is
stopped since eBPF detects short-lived connections.
The Scope Docker image size comparison:
- weaveworks/scope in current master: 22 MB (compressed), 68 MB
(uncompressed)
- weaveworks/scope with this patchset: 23 MB (compressed), 69 MB
(uncompressed)
Fixes #1168 (walking /proc to obtain connections is very expensive)
Fixes #1260 (Short-lived connections not tracked for containers in
shared networking namespaces)
Fixes #1962 (Port ebpf tracker to Go)
Fixes #1961 (Remove runtime kernel header dependency from ebpf tracker)
* Add filters for pseudo nodes.
- Don't filter the internet node as a pseudo node.
- Rename pseudo filter to unmanaged/uncontained.
- Review feedback
- Move the FilterFoo funcs into the tests
- Drop the 'nodes' from filter labels.
* Fix experimental