Type: Task
Resolution: Fixed
Priority: L3 - Default
When running a cluster of process engines, all engines currently send the same metrics data, which they obtain from the shared database. This makes the aggregated data meaningless on the ET side.
AT:
- Change the implementation to only send metrics that were created on the current engine (and collected in main memory)
- It is acceptable that metrics may be lost when the engine shuts down abnormally
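The chosen approach could be sketched as follows. This is a minimal, hypothetical illustration (class and method names such as `InMemoryMetricsRegistry`, `markOccurrence`, and `drain` are made up and are not the actual Camunda engine API): each engine accumulates its own metrics in main memory and the reporter flushes only those local values, so nothing read back from the shared database is re-sent.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch, not the actual Camunda implementation:
// metrics are counted in main memory on the engine that produced
// them, and the reporter sends only these local values.
public class InMemoryMetricsRegistry {

    private final Map<String, AtomicLong> counters = new ConcurrentHashMap<>();

    /** Record one occurrence of the given metric on this engine. */
    public void markOccurrence(String name) {
        counters.computeIfAbsent(name, k -> new AtomicLong()).incrementAndGet();
    }

    /**
     * Return the locally collected values and reset them, so each
     * reporting interval sends only metrics created on this engine.
     * Values still in memory at an abnormal shutdown are lost, which
     * the ticket accepts as a trade-off.
     */
    public Map<String, Long> drain() {
        Map<String, Long> snapshot = new HashMap<>();
        counters.forEach((name, value) -> snapshot.put(name, value.getAndSet(0)));
        return snapshot;
    }
}
```

Because each registry only ever sees increments performed on its own engine, two engines in a cluster can no longer report the same database-derived numbers twice.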
Other solution options:
- Build a mechanism that ensures only one reporter in a cluster sends the metrics
- This is very hard to get right, given that engines can start and stop at any time
- Use the reporter id for the metrics query, sending only metrics that were created with the id of the current engine
- Two engines may be configured with the same reporter id; in that case we would again count metrics multiple times. We decided it is better to report a value lower than the true metrics than a higher one.
- Send information with the metrics that allows de-duplicating them on the Kibana side
- Examples:
- Send a reporter id that is unique per reporter, so that the Kibana side uses the metrics from only one reporter. Problem: The reporter id does not remain stable across engine restarts.
- Send a time window for which the metrics were collected, and use only one data point per time window on the Kibana side. Problem: The time windows of multiple reporters will not be exactly the same but will overlap, so the data cannot be properly de-duplicated.
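For contrast, the double-counting risk of the rejected reporter-id option can be illustrated with a small, hypothetical simulation (the `MetricRow` record and column names are illustrative, not the actual database schema): when two engines are accidentally configured with the same reporter id, each one's query matches the rows written by both, so both send the same aggregated value.

```java
import java.util.List;

public class ReporterIdDuplication {

    // One metric row as persisted in the shared database
    // (other columns omitted for brevity; illustrative only).
    record MetricRow(String reporterId, long value) {}

    /** Sum the rows a reporter with the given id would select and send. */
    static long sumForReporter(List<MetricRow> table, String reporterId) {
        return table.stream()
                .filter(r -> r.reporterId().equals(reporterId))
                .mapToLong(MetricRow::value)
                .sum();
    }

    public static void main(String[] args) {
        // Both engines were configured with the same reporter id "node-1".
        List<MetricRow> table = List.of(
                new MetricRow("node-1", 5),   // written by engine A
                new MetricRow("node-1", 3));  // written by engine B
        long sentPerEngine = sumForReporter(table, "node-1");
        // Each engine sends 8, so the receiving side counts 16
        // instead of the true 8 -- the over-counting the ticket avoids.
        System.out.println(2 * sentPerEngine); // prints 16
    }
}
```

This is why the ticket prefers the in-memory approach: its failure mode (lost metrics on abnormal shutdown) under-counts, whereas shared reporter ids over-count.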
is depended on by:
- CAM-11952 ET: Know amount of active Camunda projects[versions] and the technical environments they are used in (Closed)