Task | Resolution: Done | L3 - Default | None | S
Context:
The issue is described in OPT-3558. There, we also added a quick fix that allows adjusting the nested document limit. However, this does not solve the issue in the long term, as raising the limit can have bad side effects, such as causing memory issues in Elasticsearch and slowing down queries quite significantly.
In our meeting on 22.10., we decided that we would like to resolve this issue by splitting nested documents into their own dedicated indices. Further research is necessary before we can start implementing this solution (see the spike tickets; we will also look into how Operate handles this limitation). Until then, a compromise solution will be implemented that makes it more transparent to the user how to avoid this issue by adjusting the nested document limit in the config, see OPT-4463.
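As a sketch of the interim workaround: Elasticsearch enforces the 10 000 nested document cap via the dynamic index setting index.mapping.nested_objects.limit, which can be raised per index. The index name and the new limit below are illustrative assumptions, not values taken from this ticket or from Optimize's actual config:

```
PUT /optimize-process-instance/_settings
{
  "index.mapping.nested_objects.limit": 20000
}
```

Note that this only moves the ceiling; the memory and query-performance side effects mentioned above grow with the limit, which is why this is not the long-term solution.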
AT:
- Optimize can handle more than 10 000 nested documents, in particular:
- activities per process instance
- variables per process instance
Hints:
- We could also consider a completely different architecture for this, e.g. a parent-child relationship, or denormalizing the documents.
- Another option would be to split the data and use some form of pagination.
- Optimize could also throw a warning when there are more than 10 000 nested documents; the import would then at least continue, omitting all nested documents beyond that number.
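To illustrate the parent-child hint above (the index, field, and relation names are hypothetical, not from this ticket): Elasticsearch models parent-child relationships with a join field in the mapping, which avoids nested documents entirely, at the cost of more expensive has_child/has_parent queries and the requirement that parent and children share a shard:

```
PUT /process-instance
{
  "mappings": {
    "properties": {
      "relation": {
        "type": "join",
        "relations": {
          "processInstance": ["activity", "variable"]
        }
      }
    }
  }
}
```

Child documents (e.g. individual activities or variables) would then be indexed with the routing parameter set to the parent process instance id, so that parent and children land on the same shard. This sidesteps the nested limit because each child is its own top-level document.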