- The skipHistoryOptimisticLockingExceptions configuration flag is set to true (its default value), which suppresses OptimisticLockingExceptions for historic entities
- JDBC batch processing is used
- Two jobs are executed in concurrent transactions
- Both jobs try to create the same process variable (runtime)
- In one of the transactions this causes a constraint violation, which the engine treats as an optimistic locking case, because it indicates a problem with parallel modification
- The engine tries to determine if it should throw OptimisticLockingException or not (taking skipHistoryOptimisticLockingExceptions into account)
- No OptimisticLockingException is thrown, even though one should be
- The remainder of the JDBC batch is not retried
- The rest of the database flush continues.
- The transaction commits
- The jobs are not removed from the database, leading to an inconsistency in database state
- Expected behavior: an OptimisticLockingException is thrown and the transaction rolls back
- This requires the hasOptimisticLockingException method, which determines the failed database operation, to identify the correct failing operation when JDBC batching is used
- The problem is replicated with the unit test here: https://github.com/koevskinikola/camunda-engine-unittest/tree/SUPPORT-5595
- Root cause: the hasOptimisticLockingException method uses batchExecutorException.getSuccessfulBatchResults().size() to determine the index of the failed operation, but with JDBC batching this value does not point to the operation that actually failed.
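The root cause above can be illustrated with plain JDBC types. MyBatis' BatchExecutorException wraps a java.sql.BatchUpdateException, whose getUpdateCounts() array reports a per-statement result and marks failed statements with Statement.EXECUTE_FAILED. A minimal sketch of determining the failed index from that array (the class and helper name below, FailedBatchIndex.failedOperationIndex, are hypothetical and not part of the engine; this is not the actual fix, just a demonstration of the idea):

```java
import java.sql.BatchUpdateException;
import java.sql.Statement;

public class FailedBatchIndex {

    // Hypothetical helper: locate the failed statement within a JDBC batch.
    // getUpdateCounts() holds one entry per statement; Statement.EXECUTE_FAILED
    // (-3) marks a statement that failed. Simply counting the successful results
    // (as getSuccessfulBatchResults().size() effectively does) yields a wrong
    // index whenever the driver keeps executing statements after a failure.
    static int failedOperationIndex(BatchUpdateException e) {
        int[] counts = e.getUpdateCounts();
        for (int i = 0; i < counts.length; i++) {
            if (counts[i] == Statement.EXECUTE_FAILED) {
                return i;
            }
        }
        // Some drivers stop at the first failure and return counts only for the
        // statements executed before it; the failed one is then the next index.
        return counts.length;
    }

    public static void main(String[] args) {
        // Simulate a driver that continued past a failure at index 1 of 3.
        BatchUpdateException e = new BatchUpdateException(
                "constraint violation", new int[]{1, Statement.EXECUTE_FAILED, 1});
        System.out.println(failedOperationIndex(e)); // prints 1
    }
}
```

Note that the two driver behaviors (continue after failure vs. stop at first failure) both have to be handled, which is exactly why a plain count of successful results is not a reliable index.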