# Kubernetes, Autoscaling, Resource Management

Extraction Plugins can be run in a Kubernetes cluster. This can be the same cluster Hansken runs on, or an external
cluster. For each plugin, Hansken creates a *pod*. By default, each plugin uses 12 threads to process traces in
parallel within one pod.
## Autoscaling
Hansken creates a Horizontal Pod Autoscaler (*HPA*) for each pod. HPAs manage the number of replicas of a pod based on
metrics. The Extraction Plugins SDK provides two metrics that can be set in the PluginInfo:

* Observed CPU utilization
* Observed memory usage

For more information on how to set these metrics, follow these links:

* [Specifying system resources (Java)](../java/snippets.md#Specifying system resources)
* [Specifying system resources (Python)](../python/snippets.md#Specifying system resources)
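
For illustration, here is a minimal sketch of how such resource limits could be declared in a Python plugin. The
builder and parameter names used below (`PluginResources`, `maximum_cpu`, `maximum_memory`) and their units are
assumptions for this example; the linked snippets above describe the exact API for your SDK version.

```python
# Illustrative sketch only: declare the CPU and memory limits that the HPA
# uses as scaling targets. Class and method names and units are assumptions;
# consult the linked snippets for the authoritative API.
from hansken_extraction_plugin.api.plugin_info import PluginResources

# The resulting object is passed to the plugin's PluginInfo, so Hansken can
# configure the HPA for the plugin pod with these values.
resources = (PluginResources.builder()
             .maximum_cpu(1)        # assumed unit: CPU cores
             .maximum_memory(1024)  # assumed unit: megabytes
             .build())
```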
If a pod reaches the CPU or memory usage provided in the PluginInfo, the HPA will increase the number of replicas for
that plugin. Scaling down happens automatically. The maximum number of replicas per pod is specified in the Hansken
properties and can be adapted by an operator (this requires a restart).
## Finding the right settings
The right settings depend on the Kubernetes cluster configuration, the nodes it runs on, and of course the plugin
itself. Monitor the number of replicas and the resource metrics while an extraction is running, and adjust the
settings accordingly.
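
As an illustration of such monitoring, the sketch below lists the current and maximum number of replicas of each HPA
using the official Kubernetes Python client. The namespace name is an assumption; use the namespace in which Hansken
deploys the plugin pods. Equivalent information is available through `kubectl` or a cluster dashboard.

```python
# Illustrative monitoring sketch using the official `kubernetes` Python client.
# The namespace below is an assumption; adjust it to the namespace in which the
# plugin pods run.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running inside the cluster

NAMESPACE = "hansken"  # hypothetical namespace for plugin pods

autoscaling = client.AutoscalingV1Api()
for hpa in autoscaling.list_namespaced_horizontal_pod_autoscaler(NAMESPACE).items:
    print(
        f"{hpa.metadata.name}: "
        f"{hpa.status.current_replicas}/{hpa.spec.max_replicas} replicas, "
        f"CPU {hpa.status.current_cpu_utilization_percentage}%"
    )
```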