Whenever a deployment is deleted with cascade=true, the corresponding historic process instances (including variables etc.) are deleted as well.
To do so, the algorithm selects, for each process definition to delete, the corresponding historic process instances. Then, for each historic process instance, all historic details (i.e. variable events) and historic variables are fetched from the database; any referenced byte arrays are fetched along with them.
This means the engine fetches (almost) the complete existing history of a deployment just to delete it, which can lead to an OutOfMemoryError if that history is large.
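The memory problem in the flow above can be sketched as follows. This is an illustrative simulation, not the engine's real code; all class and method names are hypothetical stand-ins, and the instance/detail counts are made up:

```java
import java.util.ArrayList;
import java.util.List;

public class CascadeDeleteSketch {

    // One historic detail, including its (possibly large) byte array payload.
    record HistoricDetail(String id, byte[] payload) {}

    // Simulated DB access: the historic instance ids for one definition.
    static List<String> historicInstanceIds(String definitionId, int n) {
        List<String> ids = new ArrayList<>();
        for (int i = 0; i < n; i++) ids.add(definitionId + "-hi-" + i);
        return ids;
    }

    // Simulated DB access: fetches ALL details of one instance into memory,
    // byte arrays included — this is where memory use grows.
    static List<HistoricDetail> detailsOf(String instanceId, int perInstance) {
        List<HistoricDetail> details = new ArrayList<>();
        for (int i = 0; i < perInstance; i++) {
            details.add(new HistoricDetail(instanceId + "-d-" + i, new byte[1024]));
        }
        return details;
    }

    public static void main(String[] args) {
        List<HistoricDetail> loaded = new ArrayList<>();
        // Everything is loaded before anything is deleted, so total memory
        // grows with instances * detailsPerInstance * payload size.
        for (String instanceId : historicInstanceIds("def-1", 100)) {
            loaded.addAll(detailsOf(instanceId, 50));
        }
        System.out.println("held in memory: " + loaded.size() + " details");
    }
}
```

With a realistically sized history (millions of details, each with a byte array), the accumulated list exhausts the heap before the delete ever runs.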
- The engine should check whether the amount of data to be deleted exceeds a given threshold. If it does, it should throw a meaningful exception stating that the deployment cannot be deleted because its history is too large.
- The Javadocs and REST documentation should describe this behavior.
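The proposed threshold check could look roughly like the sketch below. All names (`DeploymentDeleter`, `HistoryCounter`, the threshold value) are hypothetical; the point is only that a count query runs before the cascading delete, so no history is loaded into memory to make the decision:

```java
public class DeploymentDeleter {

    // Hypothetical, configurable limit — the concrete value and its
    // configuration mechanism would be up to the engine.
    static final long HISTORY_DELETE_THRESHOLD = 10_000L;

    // Count-only query: returns the number of historic entries for a
    // deployment without fetching any of them.
    interface HistoryCounter {
        long countHistoricEntries(String deploymentId);
    }

    private final HistoryCounter counter;

    DeploymentDeleter(HistoryCounter counter) {
        this.counter = counter;
    }

    void deleteDeployment(String deploymentId, boolean cascade) {
        if (cascade) {
            long entries = counter.countHistoricEntries(deploymentId);
            if (entries > HISTORY_DELETE_THRESHOLD) {
                // Meaningful exception instead of a later OutOfMemoryError.
                throw new IllegalStateException(
                    "Deployment " + deploymentId + " cannot be deleted: "
                        + entries + " historic entries exceed the threshold of "
                        + HISTORY_DELETE_THRESHOLD);
            }
        }
        // ... proceed with the (cascading) delete ...
    }

    public static void main(String[] args) {
        // Small history: the delete goes through.
        new DeploymentDeleter(id -> 5L).deleteDeployment("dep-small", true);
        System.out.println("small deployment deleted");
    }
}
```

A count query is cheap compared with the current fetch-everything approach, and callers get an actionable error instead of a crashed JVM.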