Camunda Optimize / OPT-3472

Evaluate all Elasticsearch aggregations prone to hitting the bucket size limit


    • Type: Task
    • Resolution: Fixed
    • Priority: L3 - Default
    • Fix Version/s: 3.1.0-alpha2, 3.1.0
    • Affects Version/s: None
    • Component/s: backend
    • Labels: None

      Context:
      With OPT-3428 we addressed a potential bucket size limit (default 10k) error for definition queries. However, this limit can be hit by any unbounded bucket aggregation.

      We need to investigate which of our other aggregations are also prone to exceeding the limit (e.g. group by assignee); theoretically, all unbounded aggregations are. We then need to decide case by case how likely that is and whether we should prevent it, e.g. by using composite aggregations.
      We may also consider limiting the aggregation size manually to prevent server errors, at the cost of returning an incomplete result (see the sketch below).
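      Below is a minimal sketch of what such a manual size cap could look like with the Elasticsearch high-level REST client; the class, index, field and aggregation names are placeholders for illustration, not the actual Optimize implementation:

{code:java}
import java.io.IOException;

import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;

public class CappedAssigneeAggregation {

  // Sketch: explicitly limit the terms aggregation size instead of leaving it
  // unbounded. Buckets beyond the cap are silently dropped, so the result may
  // be incomplete, but the request is far less likely to trip the bucket limit.
  public SearchResponse groupByAssignee(final RestHighLevelClient client) throws IOException {
    final SearchSourceBuilder source = new SearchSourceBuilder()
      .size(0)
      .aggregation(
        AggregationBuilders.terms("byAssignee")
          .field("assignee")   // placeholder field name
          .size(10_000));      // explicit cap instead of an unbounded aggregation
    return client.search(
      new SearchRequest("process-instance").source(source), RequestOptions.DEFAULT);
  }
}
{code}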

      Note:
      What makes this even more complicated is that the limit seems to apply not to a single layer of aggregations but to the total number of buckets across all nested aggregations.
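      For illustration, two nested terms aggregations that each stay well below the limit on their own can still exceed it together, because their bucket counts multiply (field names below are placeholders):

{code:java}
import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregationBuilder;

public class NestedBucketCountExample {

  // Sketch: nested terms aggregations multiply their bucket counts. With e.g.
  // 200 assignees and 100 candidate groups this single request already creates
  // 200 * 100 = 20,000 buckets and exceeds the default 10k limit, even though
  // each layer on its own stays well below it.
  public static TermsAggregationBuilder groupByAssigneeAndCandidateGroup() {
    return AggregationBuilders.terms("byAssignee")
      .field("assignee")               // placeholder field name
      .subAggregation(
        AggregationBuilders.terms("byCandidateGroup")
          .field("candidateGroup"));   // placeholder field name
  }
}
{code}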

      AT:

      • there is a basic helper class available that assists with scrolling through the composite aggregation (a rough sketch follows this list)
      • the events count endpoint uses the composite aggregation
      • the duration outlier analysis uses the composite aggregation
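      The actual helper implementation is not part of this ticket; the following is only a rough sketch, assuming the high-level REST client, of how such a class could page through a composite aggregation. The class name, PAGE_SIZE and the aggregation name are made up for illustration:

{code:java}
import java.io.IOException;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.search.aggregations.bucket.composite.CompositeAggregation;
import org.elasticsearch.search.aggregations.bucket.composite.CompositeAggregationBuilder;
import org.elasticsearch.search.aggregations.bucket.composite.CompositeValuesSourceBuilder;
import org.elasticsearch.search.builder.SearchSourceBuilder;

public class CompositeAggregationScroller {

  private static final String AGG_NAME = "compositePage";
  private static final int PAGE_SIZE = 5_000; // keeps each single request below the 10k bucket limit

  private final RestHighLevelClient client;

  public CompositeAggregationScroller(final RestHighLevelClient client) {
    this.client = client;
  }

  // Pages through a composite aggregation and hands every bucket to the consumer.
  // Each request returns at most PAGE_SIZE buckets; the after_key of one page is
  // fed into the next request until a page comes back smaller than PAGE_SIZE.
  public void consumeAllBuckets(final String index,
                                final List<CompositeValuesSourceBuilder<?>> sources,
                                final Consumer<CompositeAggregation.Bucket> bucketConsumer) throws IOException {
    final CompositeAggregationBuilder composite =
      new CompositeAggregationBuilder(AGG_NAME, sources).size(PAGE_SIZE);

    List<? extends CompositeAggregation.Bucket> page;
    do {
      final SearchSourceBuilder source = new SearchSourceBuilder().size(0).aggregation(composite);
      final SearchResponse response =
        client.search(new SearchRequest(index).source(source), RequestOptions.DEFAULT);
      final CompositeAggregation result = response.getAggregations().get(AGG_NAME);

      page = result.getBuckets();
      page.forEach(bucketConsumer);

      final Map<String, Object> afterKey = result.afterKey();
      if (afterKey != null) {
        // continue the next request after the last key of this page
        composite.aggregateAfter(afterKey);
      }
    } while (page.size() == PAGE_SIZE);
  }
}
{code}

      A caller could then pass e.g. a single TermsValuesSourceBuilder on the assignee field as the sources list, so that the events count endpoint and the duration outlier analysis iterate over all buckets without any single request exceeding the limit.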

      Known additional Affected Queries/Endpoints:

      • the variables endpoint may hit this issue in the rather unlikely case of 10k different variables being present for one process -> OPT-3612
      • the process parts aggregation is likely to hit the limit if more than 10,000 process instances match the filter -> OPT-3613


              Assignee: Unassigned
              Reporter: Sebastian Bathke
              Votes: 0
              Watchers: 0

                Created:
                Updated:
                Resolved: