As it is an initial implementation, it only controls latency of the
outgoing (egress) traffic. There is also a TODO to turn this plugin
into something more serious. Also, at some point we may consider
moving this plugin outside of the "example" directory.
We will use this code to execute code in some process's network
namespace.
I did the vendoring a bit differently, as gvt seems to be a bit dumb
about getting dependencies for test packages (it tried to vendor
ginkgo and gomega, since the cni tests use them).
Also, instead of vendoring golang.org/x/sys as
github.com/containernetworking/cni/vendor/golang.org/x/sys I moved it
to scope's vendor directory.
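For reference, here is a minimal sketch of running a function inside
another process's network namespace using golang.org/x/sys/unix
directly; the helper name and shape are made up for illustration and
are not the vendored cni API:

    package netnshelper

    import (
        "fmt"
        "os"
        "runtime"

        "golang.org/x/sys/unix"
    )

    // withNetNS pins the calling goroutine to its OS thread, switches
    // that thread into the network namespace of pid, runs f, and then
    // switches back. Hypothetical helper, for illustration only.
    func withNetNS(pid int, f func() error) error {
        runtime.LockOSThread()
        defer runtime.UnlockOSThread()

        // Keep a handle on our own namespace so we can return to it.
        self, err := os.Open("/proc/self/ns/net")
        if err != nil {
            return err
        }
        defer self.Close()

        target, err := os.Open(fmt.Sprintf("/proc/%d/ns/net", pid))
        if err != nil {
            return err
        }
        defer target.Close()

        if err := unix.Setns(int(target.Fd()), unix.CLONE_NEWNET); err != nil {
            return err
        }
        // Best-effort switch back; the error is ignored in this sketch.
        defer unix.Setns(int(self.Fd()), unix.CLONE_NEWNET)

        return f()
    }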
There were two problems:
- the renderer was looking for reverse names on the destination
- the probe was not annotating source nodes with reverse-resolved names
Fixes #1847
For counting we were using a table keyed on a struct containing Node
pointers. For connections between ordinary nodes this works just
fine. But for connections to/from the Internet node we want to track
individual address/port combinations, which involves an extra
lookup. Since our data structures generally contain values, not
pointers, that lookup returns a fresh copy each time, and taking its
address yields a distinct pointer (an alias) for what is logically the
same node. As a result n connections to/from a node from/to a specific
Internet IP+port would result in n rows in the count table, each with
a count of 1, instead of one row with a count of n.
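A toy reproduction of the aliasing (the types here are illustrative,
not the real report structures):

    package main

    import "fmt"

    type Node struct{ ID string }

    // edgeKey keyed on pointers compares by pointer identity, not by
    // which node the pointer refers to.
    type edgeKey struct {
        src, dst *Node
        port     string
    }

    func main() {
        counts := map[edgeKey]int{}
        internet := Node{ID: "the-internet"}
        web := Node{ID: "web"}

        for i := 0; i < 3; i++ {
            lookedUp := internet // the extra lookup yields a value copy...
            counts[edgeKey{src: &lookedUp, dst: &web, port: "443"}]++
            // ...so &lookedUp is a new pointer, hence a new key, each time.
        }

        fmt.Println(len(counts)) // 3: three rows of count 1, not one row of count 3
    }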
Things wouldn't be so bad if it were actually rendered like that -
annoying, but at least accurate - but...
Each row has an ID which is computed from the node IDs and ports, not
Node references. The ID must be unique - the frontend will only
render *one* thing per ID. Since the row IDs of our n rows are all the
same, we see one row with a count of 1 instead of n rows each with a
count of 1.
Furthermore, since the frontend's table row limiting is counting rows,
not unique row IDs, a) fewer rows would be rendered than expected, and
b) the displayed count of the number of extra rows would be too high.
The fix is to replace the Node pointers in the key with Node IDs. This
does require an extra table lookup when we come to produce the rows, but
that is just a fairly cheap map lookup.
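Under the same toy types, a sketch of keying on node IDs instead, with
the extra lookup deferred to row production:

    package main

    import "fmt"

    type Node struct{ ID string }

    // edgeKey now holds node IDs, which compare by value, so copies of
    // the same node collapse onto one key.
    type edgeKey struct {
        srcID, dstID, port string
    }

    func main() {
        nodes := map[string]Node{
            "the-internet": {ID: "the-internet"},
            "web":          {ID: "web"},
        }
        counts := map[edgeKey]int{}

        for i := 0; i < 3; i++ {
            counts[edgeKey{srcID: "the-internet", dstID: "web", port: "443"}]++
        }

        for key, n := range counts {
            src, dst := nodes[key.srcID], nodes[key.dstID] // the cheap extra lookup
            fmt.Printf("%s -> %s:%s count=%d\n", src.ID, dst.ID, key.port, n) // one row, count 3
        }
    }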
Fixes #1495.
Applies tools changes from:
commit 7f46b90e27
Author: Krzesimir Nowak <krzesimir@kinvolk.io>
Date: Thu Jul 21 11:55:08 2016 +0200
Make LatestMap "generic"
This commit makes the LatestMap type a sort of base class that
should not be used directly. It also adds a generator for "concrete"
LatestMap types with a specific data type in the LatestEntry.
commit 0f1cb82084
Author: Krzesimir Nowak <krzesimir@kinvolk.io>
Date: Thu Jul 28 14:26:08 2016 +0200
Allow testing only a subset of directories
This can be done by calling TESTDIRS="./report ./probe" make tests
commit 97eb8d033d
Author: Krzesimir Nowak <krzesimir@kinvolk.io>
Date: Thu Aug 4 11:44:21 2016 +0200
Do not spell check Makefiles and JSON files
Spell check does not handle JSON files very well: it flags substrings
of hexadecimal hashes as misspelled words (for example "daed" in
"123daed456" looks like a misspelled "dead").
Ignore Makefiles too - there is not much free text to spell check
there, and the spell check complains about the very words the Makefile
tells it to ignore.
The host CPU metric was reported as a percentage of all available CPUs,
but the limit was set to n_cpus * 100%. So on a 4-core machine the
graphs and metrics-on-canvas would never show more than 1/4th usage. Now
the limit is set to 100%.
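An illustrative sketch of the change (the names are made up):

    package metrics

    type Metric struct {
        Value, Max float64
    }

    // hostCPUMetric: the value is already a percentage of all CPUs
    // combined, so the cap is 100, not runtime.NumCPU()*100. With the
    // old cap a fully busy 4-core host filled only a quarter of the
    // gauge.
    func hostCPUMetric(percentOfAllCPUs float64) Metric {
        return Metric{Value: percentOfAllCPUs, Max: 100}
    }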
Fixes #1664.
The container CPU metric was reported in units of 100% = 1 CPU. So
the ratio was correct, but since we don't show limits in most places it
is hard to interpret that figure. It also makes sorting by CPU usage
highly misleading. So now we normalise everything to 100%. That too can
be misleading, depending on what you are looking for, but it's generally
less surprising.
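An illustrative sketch of that normalisation (again, the names are
made up):

    package metrics

    import "runtime"

    // normaliseContainerCPU converts a raw figure where 100% == 1 CPU
    // into a share of the whole machine, e.g. 200% on a 4-core box
    // becomes 50%.
    func normaliseContainerCPU(rawPercent float64) float64 {
        return rawPercent / float64(runtime.NumCPU())
    }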