The vast majority of the cost is memory allocation, so doing a first
pass to see whether any upgrading is necessary at all, and thus
avoiding allocation when it isn't, is a massive saving.
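A minimal sketch of the pattern, with made-up names (the real code upgrades Scope's own types): a cheap read-only first pass decides whether anything needs rewriting, and in the common case we return the input untouched, with zero allocations.

```go
import "strings"

// Illustrative only: the "legacy;" prefix and the rewrite stand in for
// whatever per-element upgrade the real code performs.
func upgradeAll(ids []string) []string {
	needed := false
	for _, id := range ids {
		if strings.HasPrefix(id, "legacy;") {
			needed = true
			break
		}
	}
	if !needed {
		return ids // common case: no allocation at all
	}
	out := make([]string, len(ids))
	for i, id := range ids {
		out[i] = strings.TrimPrefix(id, "legacy;")
	}
	return out
}
```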
Most maps we merge have the same keys, or at least one set of keys is
a subset of the other. Therefore, allocate a result slice capable of
holding only the maximum of the two key counts, rather than their sum.
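A sketch of that sizing, assuming key-sorted string slices (illustrative, not the actual Scope types): the result capacity is the larger of the two lengths, and append still grows it correctly in the rare case the key sets genuinely diverge.

```go
func mergeKeys(a, b []string) []string {
	n := len(a)
	if len(b) > n {
		n = len(b) // max, not len(a)+len(b)
	}
	out := make([]string, 0, n)
	i, j := 0, 0
	for i < len(a) && j < len(b) {
		switch {
		case a[i] == b[j]:
			out = append(out, a[i])
			i, j = i+1, j+1
		case a[i] < b[j]:
			out = append(out, a[i])
			i++
		default:
			out = append(out, b[j])
			j++
		}
	}
	out = append(out, a[i:]...)
	return append(out, b[j:]...)
}
```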
so they can share code and are easier to run in combination.
We take advantage of the code sharing by generalising the report
rendering benchmarks to read & merge reports from a dir.
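Roughly the shape such a benchmark takes; the directory name and the readReportsFromDir helper are hypothetical, and the report.MakeReport/Merge calls assume Scope's report API.

```go
import (
	"testing"

	"github.com/weaveworks/scope/report"
)

func BenchmarkMergeReports(b *testing.B) {
	// readReportsFromDir is a hypothetical helper that loads and decodes
	// every report file found under the given directory.
	reports := readReportsFromDir(b, "testdata/reports")
	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		merged := report.MakeReport()
		for _, r := range reports {
			merged = merged.Merge(r)
		}
	}
}
```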
Pass in a slice on the stack instead of allocating one on the heap:
this reduces garbage, and hence makes the program run faster.
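A sketch of the idea with illustrative names: the caller supplies scratch space backed by a fixed-size array, so the callee appends into it without forcing a heap allocation.

```go
// appendEven filters src into dst and returns the extended slice.
func appendEven(dst, src []int) []int {
	for _, v := range src {
		if v%2 == 0 {
			dst = append(dst, v)
		}
	}
	return dst
}

// Caller side:
//   var buf [16]int                      // lives on the stack
//   evens := appendEven(buf[:0], values) // no heap allocation if it fits
```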
Also apply knowledge that critbitgo will do an append() with one extra
byte, so we do that allocation up-front too. This is innocuous should
we stop using critbitgo or should its internals change.
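A sketch of the up-front allocation; the one spare byte of capacity is there only because we know (today) that critbitgo appends one more byte to the key.

```go
func makeKey(addr []byte) []byte {
	// Reserve one extra byte of capacity so critbitgo's internal append()
	// can reuse this buffer instead of allocating a bigger one.
	key := make([]byte, len(addr), len(addr)+1)
	copy(key, addr)
	return key
}
```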
We encode the zero time as an empty string, so we should do the
reverse when decoding.
Otherwise time.Parse() returns an error, which is created on the heap,
causing extra garbage-collection load.
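A sketch of the decode side; the RFC3339Nano layout is an assumption, the real code may use a different format.

```go
import "time"

func decodeTime(s string) (time.Time, error) {
	if s == "" {
		return time.Time{}, nil // zero time round-trips as ""
	}
	return time.Parse(time.RFC3339Nano, s)
}
```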
golint has gained a new check, and we don't freeze the golint version, so CI
was failing on unrelated PRs:
app/multitenant/consul_client.go:69:2: redundant if ...; err != nil check, just return error instead.
report/marshal.go:188:2: redundant if ...; err != nil check, just return error instead.
Fix those!
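The pattern golint now flags (illustrative code, not the actual lines in those files):

```go
// before: golint reports "redundant if ...; err != nil check"
if err := client.Register(service); err != nil {
	return err
}
return nil

// after: just return the error directly
return client.Register(service)
```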
scope-app:
* Adds `-app.metrics-graph` CLI flag for configuring the base URL to
use for graph links; supports :orgID and :query placeholders (see the
sketch after this list)
* Assigns query URLs to existing metrics and appends empty metrics if missing
scope-ui:
* Extends <CloudFeature /> with option alwaysShow
* Adds <CloudLink /> to simplify routing when in cloud vs not in cloud
* Links metric graphs in the UI's node details view for all k8s
topologies and containers so far
* Tracks metric graph click in mixpanel `scope.node.metric.click`
* Uses percentages and MB for CPU/Memory URLs
* Passes timetravel timestamp to cortex in deeplink
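A sketch of how the flag and its placeholders could be wired up; only `-app.metrics-graph`, `:orgID` and `:query` come from the change itself, every other name here is made up.

```go
import (
	"flag"
	"net/url"
	"strings"
)

var metricsGraphURL = flag.String("app.metrics-graph", "",
	"base URL for metric graph links; supports :orgID and :query placeholders")

// metricURL fills in the placeholders for one metric's deep link.
func metricURL(orgID, query string) string {
	if *metricsGraphURL == "" {
		return ""
	}
	return strings.NewReplacer(
		":orgID", url.PathEscape(orgID),
		":query", url.QueryEscape(query),
	).Replace(*metricsGraphURL)
}
```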
This prevents cluttering host.LocalNetworks with lots of /32
addresses. These were unsightly and rather distracting in the UI. They
also bloated the report and slowed down server-side rendering.
Fixes #2748.
Final list of shapes for controllers view, with rationale:
Heptagon for Deployment - same shape as Service, which typically has a corresponding Deployment
Triangle for CronJob - weird shape for weird thing
Pentagon for DaemonSet - because pentagons and daemon worship :P
Octagon for StatefulSet - last shape left
While we're there, adopt a consistent ordering for all places where shapes are listed.
Order is least sides to most sides, with circle before polygons, and complex shapes (currently just Cloud) after.
On shape choices for topologies:
* Since the k8s logo is a heptagon, we want pods to be heptagons.
* Since triangle is 'a bit weird', we put it on the least-important type, replica sets.
* Pentagons look a little weirder than octagons (it's the lack of symmetry), so we put octagons on the most common (deployments)
The rendering code checks whether endpoint IPs are part of
cluster-local networks. Due to the prevalence of endpoints - medium-sized
reports can contain many thousands of endpoints - this is
performance critical. Alas, the existing code performs the check via a
linear scan of a list of networks. That is slow when there are more
than a few, which will be the case in the context of k8s, since there
the probes register service IPs as local /32 networks.
Here we change the representation of the set of networks to a prefix
tree (aka trie), which is well-suited for IP network membership checks,
since networks are in fact bitstring prefixes.
The specific representation is a crit-bit tree, but that choice was
purely based on implementation convenience - the chosen library is the
only one I could find that directly supports IP networks.
The rendering code checks whether endpoint IPs are part of
cluster-local networks. Due to the prevalence of endpoints - medium-sized
reports can contain many thousands of endpoints - this is
performance critical. Alas, the existing code performs the check via a
linear scan of a list of networks. That is slow when there are more
than a few. Unfortunately in some common k8s network setups, e.g. on
AWS, a cluster can contain hundreds of networks, due to /32 networks
derived from interfaces with multiple IPs.
Here we change the representation of the set of networks to a prefix
tree (aka trie), which is well-suited for IP network membership checks,
since networks are in fact bitstring prefixes.
The specific representation is a crit-bit tree, but that choice was
purely based on implementation convenience - the chosen library is the
only one I could find that directly supports IP networks.
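A minimal sketch of the new representation, assuming critbitgo's Net API (NewNet, Add, MatchIP) as I recall it; the wrapper type and method names here are illustrative.

```go
import (
	"net"

	"github.com/k-sone/critbitgo"
)

// Networks answers "is this IP inside any known local network?" via a
// crit-bit tree keyed on the network prefixes, instead of a linear scan.
type Networks struct{ *critbitgo.Net }

func NewNetworks() Networks { return Networks{critbitgo.NewNet()} }

func (n Networks) Add(ipnet *net.IPNet) error {
	return n.Net.Add(ipnet, struct{}{})
}

func (n Networks) Contains(ip net.IP) bool {
	network, _, err := n.Net.MatchIP(ip) // longest-prefix match
	return err == nil && network != nil
}
```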
...by removing them. It was a ridiculous amount of contorted code to
test some utterly trivial functionality that is largely provided by
the golang stdlib.