Type: Sub-task
Resolution: Done
Priority: L3 - Default
Context:
See parent ticket:
It appears that the "expected counts" taken from the engine in the import performance tests are incorrect: the engine continues to process some process instance data after we have taken the expected count, so when we later compare our imported data against the expected counts, they don't match.
To avoid this mismatch, we should evaluate the expected counts on demand, at the moment the test needs them, instead of reading them from the metadata fields in the gcloud bucket.
AT:
- Add on-demand evaluation of the required counts to the import performance tests (see the sketch below)
- Remove the metadata fields that are no longer needed (note: it might make sense to keep the process and decision instance count fields as a reference?)
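
A minimal sketch of what the on-demand evaluation could look like, assuming the engine's REST API is reachable from the test and exposes the standard Camunda 7 history count endpoints; the class name, base URL, and helper names below are hypothetical:

{code:java}
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical helper for the import performance tests: instead of reading
// pre-computed counts from the gcloud bucket metadata, ask the engine for
// the current counts right before the assertion is made.
public class EngineCountFetcher {

  // Assumed engine REST base URL; adjust to the test environment.
  private static final String ENGINE_REST = "http://localhost:8080/engine-rest";
  // The count endpoints return a body like {"count": 42}.
  private static final Pattern COUNT_PATTERN = Pattern.compile("\"count\"\\s*:\\s*(\\d+)");

  private final HttpClient client = HttpClient.newHttpClient();

  public long fetchHistoricProcessInstanceCount() throws Exception {
    return fetchCount(ENGINE_REST + "/history/process-instance/count");
  }

  public long fetchHistoricDecisionInstanceCount() throws Exception {
    return fetchCount(ENGINE_REST + "/history/decision-instance/count");
  }

  private long fetchCount(String url) throws Exception {
    HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
    HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
    Matcher matcher = COUNT_PATTERN.matcher(response.body());
    if (!matcher.find()) {
      throw new IllegalStateException("Unexpected count response: " + response.body());
    }
    return Long.parseLong(matcher.group(1));
  }
}
{code}

The test would then call these fetchers right before the assertion, so the expected counts reflect the same point in time as the imported data rather than values captured earlier, when the bucket metadata was written.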