- Refactor dotted edge logic.
- Change Storage view to show storage components as well as all
the pods.
- Sentence-case storage-related variables.
Signed-off-by: Satyam Zode <satyam.zode@openebs.io>
This will:
- Add the StorageClass resource. Storage classes are referenced
in the PVC spec; we use the storage class name from the PVC spec to
add adjacency to the PVC node (see the sketch after this list).
- Add a square sheet shape for StorageClass.
- Add a storage filter to the Pods topology.
The storage filter lets users see a distinct view of
stateful applications.
- Add a visually distinct edge to show storage adjacency.
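A minimal sketch of the adjacency step, assuming a placeholder node type and ID scheme rather than Scope's report data structures; only the `StorageClassName` field of the PVC spec is taken from the real Kubernetes API:

```go
// Illustrative sketch only: node and the ID format are placeholders,
// not Scope's report types.
package storage

import (
	corev1 "k8s.io/api/core/v1"
)

// node is a minimal stand-in for a topology node with an adjacency list.
type node struct {
	ID       string
	Adjacent []string
}

// addStorageClassAdjacency links each PVC node to the node of the storage
// class named in its spec, when one is set.
func addStorageClassAdjacency(pvcs []corev1.PersistentVolumeClaim, nodes map[string]*node) {
	for _, pvc := range pvcs {
		if pvc.Spec.StorageClassName == nil || *pvc.Spec.StorageClassName == "" {
			continue // no storage class requested; nothing to link
		}
		pvcID := "pvc;" + pvc.Namespace + ";" + pvc.Name
		scID := "storageclass;" + *pvc.Spec.StorageClassName
		if n, ok := nodes[pvcID]; ok {
			n.Adjacent = append(n.Adjacent, scID)
		}
	}
}
```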
Signed-off-by: Satyam Zode <satyam.zode@openebs.io>
This will:
- Add Kubernetes volume resources such as PV and PVC (a client-go
listing sketch follows below).
- Add shapes for Kubernetes PV and PVC: a `Cylinder` shape for PV and a
`Dotted Cylinder` shape for PVC.
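A hedged sketch of how PV and PVC objects can be fetched with client-go before being turned into topology nodes. The method signatures shown are those of a recent client-go (older releases omit the context argument), and in-cluster config is just one way to connect:

```go
// Hedged sketch: list PVs and PVCs so they can be added as topology nodes.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	config, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	pvs, err := client.CoreV1().PersistentVolumes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	pvcs, err := client.CoreV1().PersistentVolumeClaims(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("found %d PVs and %d PVCs\n", len(pvs.Items), len(pvcs.Items))
}
```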
Signed-off-by: Satyam Zode <satyam.zode@openebs.io>
* Sentence-cased text everywhere
Follows Weave Cloud's direction of sentence case on most things.
* More space between sorter caret and label
* Use full topology name for table header
Upgraded from 99c19923, branch release-3.0.
This required fetching or upgrading the following:
* k8s.io/api to kubernetes-1.9.1
* k8s.io/apimachinery to kubernetes-1.9.1
* github.com/juju/ratelimit to 1.0.1
* github.com/spf13/pflag to 4c012f6d
Also, update Scope's imports and function calls to be compatible with the new client.
It's always set to render.FilterUnconnectedPseudo, so we can simply
use that constant in the one function (RendererForTopology) that
looked at that value.
In many cases we only need to know whether there are _any_ connected
probes, and not the probe details. Obtaining that info is cheaper
since it requires no reading or merging of reports.
This requires no report reading / merging.
We plan to expose this in the HTTP API, so the UI gets a cheap way of
checking whether the app is currently receiving data from probes.
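A sketch of the kind of endpoint this enables, assuming a hypothetical `/api/probes/connected` path and a simple counter maintained by the connection-handling code; it is not Scope's actual handler:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"sync/atomic"
)

// connectedProbes is maintained by the (not shown) probe connection code:
// incremented on connect, decremented on disconnect.
var connectedProbes int64

// handleProbesConnected answers a cheap yes/no: is any probe connected?
// It reads a single counter, so no reports are read or merged.
func handleProbesConnected(w http.ResponseWriter, r *http.Request) {
	resp := struct {
		Connected bool `json:"connected"`
	}{Connected: atomic.LoadInt64(&connectedProbes) > 0}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(resp)
}

func main() {
	http.HandleFunc("/api/probes/connected", handleProbesConnected)
	log.Fatal(http.ListenAndServe(":4040", nil))
}
```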
The vast majority of the cost is memory allocation, so doing a first
pass to see whether any upgrading is necessary at all, and thus
avoiding allocation when it isn't, is a massive saving.
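A sketch of the pattern with placeholder types and helpers: scan once to see whether any entry needs upgrading, and return the input untouched (no allocation) when none does:

```go
package upgrade

// record, needsUpgrade, and upgradeOne are illustrative placeholders.
type record struct {
	Version int
	Data    string
}

const currentVersion = 2

func needsUpgrade(r record) bool { return r.Version < currentVersion }

func upgradeOne(r record) record {
	r.Version = currentVersion
	return r
}

// upgradeAll does a cheap first pass to check whether any work is needed.
// In the common case nothing is, and the input is returned as-is with no
// new allocation; only otherwise is an upgraded copy built.
func upgradeAll(rs []record) []record {
	needed := false
	for _, r := range rs {
		if needsUpgrade(r) {
			needed = true
			break
		}
	}
	if !needed {
		return rs // fast path: no allocation
	}
	out := make([]record, len(rs))
	for i, r := range rs {
		if needsUpgrade(r) {
			out[i] = upgradeOne(r)
		} else {
			out[i] = r
		}
	}
	return out
}
```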
so they can share code and are easier to run in combination.
We take advantage of the code sharing by generalising the report
rendering benchmarks to read & merge reports from a dir.
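A sketch of what the generalised benchmark could look like; the report type, JSON decoding, and merge are stand-ins for Scope's own, and the `REPORT_DIR` environment variable is an assumed way of pointing the benchmark at a directory of report files:

```go
package render_test

import (
	"encoding/json"
	"os"
	"path/filepath"
	"testing"
)

// fakeReport and mergeReports stand in for Scope's report type and merge.
type fakeReport map[string]interface{}

func mergeReports(reports []fakeReport) fakeReport {
	out := fakeReport{}
	for _, r := range reports {
		for k, v := range r {
			out[k] = v
		}
	}
	return out
}

// loadReports reads every *.json report found in dir.
func loadReports(b *testing.B, dir string) []fakeReport {
	paths, err := filepath.Glob(filepath.Join(dir, "*.json"))
	if err != nil {
		b.Fatal(err)
	}
	reports := make([]fakeReport, 0, len(paths))
	for _, p := range paths {
		buf, err := os.ReadFile(p)
		if err != nil {
			b.Fatal(err)
		}
		var r fakeReport
		if err := json.Unmarshal(buf, &r); err != nil {
			b.Fatal(err)
		}
		reports = append(reports, r)
	}
	return reports
}

func BenchmarkMergeReportsFromDir(b *testing.B) {
	reports := loadReports(b, os.Getenv("REPORT_DIR"))
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		_ = mergeReports(reports)
	}
}
```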
...rather than before. That way, nodes which become unconnected during
filtering are removed, which is what we want. ATM we are depending on
some 'unconnected' filtering inside every filter, which is expensive
and largely redundant. We should soon be able to remove that.
Downside: 'unconnected' filtering is no longer memoised.
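A sketch, with a placeholder graph type, of the intended shape: run the node filters first, then remove unconnected nodes in one pass at the end, which also catches nodes that only became unconnected because their neighbours were filtered away:

```go
package render

// graph is an illustrative placeholder: node ID -> IDs of adjacent nodes.
type graph map[string][]string

// filterNodes keeps only nodes accepted by keep and trims adjacency lists
// so they reference surviving nodes only.
func filterNodes(g graph, keep func(id string) bool) graph {
	out := graph{}
	for id, adj := range g {
		if !keep(id) {
			continue
		}
		kept := []string{}
		for _, other := range adj {
			if keep(other) {
				kept = append(kept, other)
			}
		}
		out[id] = kept
	}
	return out
}

// pruneUnconnected drops nodes with no edges in either direction. Running
// it once, after filtering, also removes nodes left unconnected because
// their neighbours were filtered out.
func pruneUnconnected(g graph) graph {
	hasEdge := map[string]bool{}
	for id, adj := range g {
		for _, other := range adj {
			hasEdge[id] = true
			hasEdge[other] = true
		}
	}
	out := graph{}
	for id, adj := range g {
		if hasEdge[id] {
			out[id] = adj
		}
	}
	return out
}
```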