Due to AWS API rate limits, we need to minimize API calls as much as possible. Our stated objectives:

* for all displayed tasks and services to have up-to-date metadata
* for all tasks to map to services where possible

My approach here:

* Tasks only contain immutable fields (that we care about), so we cache tasks forever and only call DescribeTasks the first time we see a new task.
* We attempt to match tasks to services with the info we have. Any "referenced" service, i.e. a service with at least one matching task, needs to be updated to refresh its changing data.
* If a task doesn't match any of the (updated) services, i.e. an entirely new service needs to be found, we do a full list and detail of all services (we don't re-detail the ones we just refreshed).
* To avoid unbounded memory usage, we evict tasks and services from the cache after one minute without use. This should be long enough to gloss over things like temporary failures.

This gives us exactly one call per task, and one call per referenced service per report, which is unavoidable if we're to keep data fresh. Expensive "describe all" service queries happen only when newly referenced services appear, which should be rare.

We could make a few very minor improvements here, such as trying to refresh unreferenced but known services before doing a list query, or getting details one by one when "describing all" and stopping once all matches have been found. However, I believe these would yield very minor gains, if any, in the number of calls, while having an unjustifiable effect on latency, since we wouldn't be able to make requests as concurrently.

Speaking of which, this change has a minor performance impact: even though we're now making fewer calls, we can't make them as concurrently.
Old code:

* concurrently:
  * describe tasks (1 call)
  * sequentially:
    * list services (1 call)
    * describe services (N calls, concurrently)

Assuming full concurrency, total latency: 2 end-to-end calls.

New code (worst case):

* sequentially:
  * describe tasks (1 call)
  * describe services (N calls, concurrently)
  * list services (1 call)
  * describe services (N calls, concurrently)

Assuming full concurrency, total latency: 4 end-to-end calls.

In practical terms, I don't expect this to matter.
Weave Scope - Monitoring, visualisation & management for Docker & Kubernetes
Weave Scope automatically generates a map of your application, enabling you to intuitively understand, monitor, and control your containerized, microservices-based application.
Understand your Docker containers in real-time
Choose an overview of your container infrastructure, or focus on a specific microservice. Easily identify and correct issues to ensure the stability and performance of your containerized applications.
Contextual details and deep linking
View contextual metrics, tags and metadata for your containers. Effortlessly navigate from the processes inside your containers to the hosts they run on, arranged in expandable, sortable tables. Easily find the container using the most CPU or memory for a given host or service.
Interact with and manage containers
Interact with your containers directly: pause, restart and stop containers. Launch a command line. All without leaving the Scope browser window.
Getting started
sudo curl -L git.io/scope -o /usr/local/bin/scope
sudo chmod a+x /usr/local/bin/scope
scope launch
This script will download and run a recent Scope image from the Docker Hub.
Now, open your web browser to http://localhost:4040. (If you're using
boot2docker, replace localhost with the output of boot2docker ip.)
For instructions on installing Scope on Kubernetes, DCOS or ECS, see the docs.
Getting help
If you have any questions about, feedback for, or problems with Scope, we invite you to:
- Read the docs.
- Join our public Slack channel.
- Send an email to weave-users@weave.works.
- File an issue.
Your feedback is always welcome!